What Litigators Need to Know About Proposed FRE 707 on “Machine-Generated Evidence”


On June 10, 2025, the Federal Rules Advisory Committee approved publication of proposed Federal Rule of Evidence 707.


If adopted, the rule would require that machine-generated evidence offered without a qualified expert—for example, results produced by an AI program—meet the same reliability standards that apply to expert testimony under Rule 702(a)–(d).

The proposal is open for public comment through February 16, 2026, with virtual hearings scheduled for January 2026. So far, no comments on proposed Rule 707 have been submitted, which means trial lawyers and litigation teams have a real chance to shape the rule before it is finalized.

What the Rule Would Do

Rule 707 is aimed at stopping a party from “sneaking in” AI outputs or other complex machine-generated results by having a lay witness click “Export” or “Print Report” and present the output to the jury without any scrutiny of how reliable the underlying process really is.

But the rule would not apply to simple, everyday tools—like digital thermometers, scales, or other basic measuring devices that courts and juries already rely on without expert testimony.

Why Some Say We Should Wait

Limited Case Law

Since ChatGPT was introduced in late 2022, only a handful of federal cases have dealt squarely with machine-generated evidence. Courts have admitted some computer outputs when they were simply copies of existing data (like text messages pulled from a phone) but have applied stricter standards when the software was drawing conclusions or making predictions (like DNA software estimating the likelihood that a sample came from a defendant).

That is not much case law. Courts seem capable of handling these questions under the current rules—Rule 702, Rules 901 and 902 (authentication), and Rule 403 (the balancing test). A new rule may be premature.

Risk of Overreach

The draft rule could create fights over whether routine digital outputs need a full expert foundation. Imagine costly battles over whether a standard software printout in a product-defect case or a workplace exposure study must clear the Rule 702 hurdles even when it merely reports raw data.

Technology Is Moving Too Fast

By the time a new rule is adopted, machine learning tools and large language models may have advanced so far that the rule feels outdated—or worse, unhelpful. ChatGPT, Claude, Gemini, and others have come a long way in less than three years. Courts adjudicating these issues case by case will have more flexibility to frame their approaches to admissibility as current models evolve and improve and new models emerge.

Civil Litigation Examples to Watch

Here are some practical areas where proposed Rule 707 could come into play:

  • Product Liability:
    AI-powered failure analysis reports (e.g., software predicting why a consumer product overheated). Is the program just reporting temperatures, or drawing conclusions about “design defect”?

  • Occupational Disease and Injury:
    Workplace exposure modeling (e.g., software estimating airborne particle levels or predicting cumulative exposure risk). If the system generates predictions, Rule 707 might demand expert-level reliability proof.

  • Medical Malpractice:
    AI diagnostic tools that flag possible conditions (e.g., an algorithm highlighting a lung nodule as “likely malignant”). Courts will need to decide whether such outputs can come in directly or whether an expert must explain their basis.

  • Complex Commercial Litigation:
    LLM-based contract analytics (e.g., a program classifying agreements as “likely containing antitrust risk” or predicting likely damages exposure). Outputs like these involve judgment calls, not just data copying.

Two Common Questions

1. Does Rule 707 imply that LLMs have reached “AGI” (artificial general intelligence)?

No. The rule does not assume AI is “like a human.” It simply says: if a machine is doing the same kind of analysis a human expert would do, the output should be tested for reliability the same way. This is about fairness, not about recognizing AI as human-like.

2. Why apply Rule 702 if a layperson can read the output?

Because what matters is the hidden process. A paralegal, nurse, or IT tech might hit “Generate Report” and testify about what’s printed, but if the software itself is making inferences (predicting cancer, allocating liability, estimating exposure), the real question is whether the process is reliable. Without 702 scrutiny, unreliable or biased machine inferences could reach the jury untested.


Our Take: Don’t Rush, or at Least Narrow the Rule

The Advisory Committee is right to worry about parties bypassing Rule 702. But with so little case law and such rapid technological change, it may be wiser to wait and let courts develop precedent under existing rules—or, at minimum, narrow Rule 707 to cover only inferential or predictive outputs, not simple descriptive ones.

Want to weigh in?

This proposal matters for every litigator handling expert-heavy cases, and the public comment period is the time to get involved.

Note: Although the rule has been out for several months, no comments have been submitted as of this writing in September 2025. That means trial lawyers and litigation teams have a real chance to weigh in and shape how courts will handle AI evidence for years to come.

 

With tools like esumry, deposition transcript analysis is fast and strategic. Tag testimony, assess credibility, and get ahead of how the other side will use the record—before they do.

14-Day Free Trial

Try esumry out!

We invite you to see firsthand how AI and automation can revolutionize your deposition analysis.

Get a 14-day free trial of esumry today and start transforming your litigation practice for the future.

James Chapman