The Persian Gulf conflict that escalated in early March 2026 is the first major military engagement in which autonomous weapons systems, AI-driven intelligence analysis, and machine learning targeting algorithms are operating at scale on all sides. The United States, Israel, and Iran are each deploying AI systems that generate data with potential evidentiary significance: targeting recommendations, threat assessments, battle damage estimates, civilian casualty predictions, and after-action analyses.

This data will reach American courtrooms. It is not a question of if. Veterans will file disability claims supported by AI-generated battlefield data. Families of civilians killed in strikes will file tort claims against defense contractors. Congressional investigations will subpoena AI targeting logs. War crimes inquiries will demand access to the algorithmic decision-making that selected targets. And in every one of these proceedings, the central evidentiary question will be the same: can machine-generated evidence be admitted, and if so, under what standard?

The Current Evidentiary Framework Is Inadequate

The Federal Rules of Evidence were written for a world in which evidence was generated by humans. Documents were authored by people. Photographs were taken by people. Expert opinions were formed by people. The rules governing authentication (Rule 901), hearsay (Rules 801 through 807), and expert testimony (Rules 702 through 706) all assume a human actor at some point in the chain.

Autonomous weapons systems break this assumption. When an AI targeting system analyzes satellite imagery, identifies a target, calculates a confidence score, and recommends a strike, the resulting data was not generated by a human in any meaningful sense. A human may have designed the system. A human may have approved the strike. But the analysis itself, the core evidence, was machine-generated.

Current rules handle this poorly. Is AI targeting data hearsay? It is an out-of-court statement offered for the truth of the matter asserted, which is the textbook definition. But the "declarant" is a machine, and the hearsay rules assume human declarants. Is it a business record under Rule 803(6)? Perhaps, but the business records exception requires a "qualified witness" who can testify to the record-keeping process, and no human witnessed the AI's internal decision-making. Is it expert opinion under Rule 702? The AI is not a witness, cannot be cross-examined, and cannot explain its reasoning in the way Daubert requires.

What FRE 707 Would Require

The proposed Federal Rule of Evidence 707, currently under consideration by the Advisory Committee on Evidence Rules, would create a dedicated framework for machine-generated evidence. Its core requirements, as currently drafted, include:

Reliability showing. The proponent must demonstrate that the AI system is reliable for the purpose for which its output is being offered. This includes evidence of the system's accuracy rate, error rate, validation methodology, and known limitations. For military AI systems, this would require disclosure of the system's performance in testing environments and, where available, its performance in operational conditions.

Transparency requirement. The proponent must disclose sufficient information about the system's methodology to allow the opposing party to challenge its reliability. This does not require disclosure of source code in every case, but it does require enough information for a qualified expert to evaluate the system's approach. For classified military systems, this creates an immediate tension with national security interests.

Human oversight showing. The proponent must demonstrate what human oversight, if any, was involved in generating the evidence. Was a human in the loop? On the loop? Or was the system fully autonomous? The level of human oversight affects the weight the evidence should receive, and the rule requires the proponent to be transparent about it.

Error rate disclosure. The proponent must disclose the system's known or estimated error rate. For targeting systems, this means disclosing how often the system misidentifies targets, generates false positives, or fails to identify relevant factors. This requirement is analogous to the error rate inquiry under Daubert, but applied specifically to machine outputs rather than expert methodology.
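
To make these disclosure obligations concrete, consider how the figures are typically derived. The sketch below, in Python with purely hypothetical validation counts, computes the accuracy, false positive rate, and false negative rate a proponent might be required to disclose; none of the numbers describe any actual system.

```python
# Hypothetical validation results for a target-identification system.
# These counts are illustrative assumptions, not real performance data.
true_positives = 940    # real targets correctly identified
false_positives = 25    # non-targets wrongly flagged as targets
true_negatives = 8915   # non-targets correctly cleared
false_negatives = 120   # real targets the system missed

total = true_positives + false_positives + true_negatives + false_negatives

# Overall accuracy: fraction of all classifications that were correct.
accuracy = (true_positives + true_negatives) / total

# False positive rate: how often a non-target is flagged as a target.
# For a targeting system, this is the figure most relevant to civilian harm.
false_positive_rate = false_positives / (false_positives + true_negatives)

# False negative rate: how often a real target goes unidentified.
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"Accuracy:            {accuracy:.3%}")
print(f"False positive rate: {false_positive_rate:.3%}")
print(f"False negative rate: {false_negative_rate:.3%}")
```

Note what the breakdown reveals: this hypothetical system reports 98.6% overall accuracy while missing more than 11% of real targets, a reminder that a single headline "accuracy rate" can conceal the failure mode that matters most in a given case.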

The Military AI Evidence Problem

Military AI systems present unique challenges under any evidentiary framework, including FRE 707.

Classification barriers. The most capable military AI systems are classified. Their architectures, training data, performance specifications, and operational parameters are state secrets. FRE 707's transparency requirement collides directly with classification. The government will invoke the state secrets privilege. Plaintiffs will argue that the privilege cannot be used to shield evidence of wrongdoing. Courts will be forced to navigate the Classified Information Procedures Act (CIPA) and its interaction with the new evidentiary rule, a question that has no precedent.

Battlefield conditions. AI systems tested in controlled environments may perform differently under battlefield conditions: degraded communications, adversarial interference, corrupted sensor data, and time pressure. An AI targeting system that achieves 99% accuracy in testing may perform at 85% accuracy in a contested electromagnetic environment; across a thousand engagements, that is the difference between roughly ten misidentifications and one hundred fifty. FRE 707's reliability requirement must account for the gap between testing and operational performance, and the party offering the evidence may not have access to the operational performance data.

Multi-system complexity. Modern military operations involve multiple AI systems operating in coordination: intelligence analysis systems, targeting systems, battle damage assessment systems, and command-and-control systems. The output of one system becomes the input to another. When a targeting recommendation is based on intelligence analysis that was itself AI-generated, the evidentiary chain becomes recursive. FRE 707 will need to address whether each system in the chain must independently satisfy the rule's requirements, or whether the chain can be evaluated holistically.
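
The chain question also has a quantitative dimension. As a rough illustration, assuming (purely for the sketch) that each system's errors are independent, the reliability of the pipeline is at best the product of its components' reliabilities, so individually impressive systems can compound into a much weaker chain:

```python
# Illustrative sketch: compounded reliability in a chained AI pipeline.
# The per-system figures are hypothetical, and the independence assumption
# is itself contestable; real systems may fail in correlated ways.
pipeline = {
    "intelligence analysis": 0.97,
    "target identification": 0.95,
    "battle damage assessment": 0.93,
}

joint_reliability = 1.0
for system, reliability in pipeline.items():
    joint_reliability *= reliability
    print(f"after {system:<25} cumulative reliability = {joint_reliability:.3f}")

# Three systems at 93-97% individually compound to roughly 86% end to end,
# which is why evaluating only the final system in the chain understates risk.
```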

Adversarial manipulation. In an active conflict, adversaries may attempt to manipulate AI systems through adversarial techniques: spoofing sensor data, planting decoy targets, or exploiting known vulnerabilities in the AI's perception systems. If AI-generated evidence is offered in court, the opposing party must be able to challenge whether the system was subject to adversarial manipulation at the time the evidence was generated. This requires technical expertise that is rare in the legal community and may be classified on the military side.
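
For readers unfamiliar with how subtle such manipulation can be, the sketch below builds a toy linear classifier over sensor features and shows that a perturbation smaller than ordinary sensor noise can flip its decision. It is a deliberately simplified stand-in for real perception systems; the model, weights, and perturbation budget are all assumptions for illustration.

```python
import numpy as np

# Toy linear "perception" model: score = w . x + b; score > 0 means "target".
# The weights, features, and epsilon below are illustrative assumptions.
w = np.array([1.2, -0.8, 0.5, 2.0])
b = -1.0
x = np.array([0.9, 0.4, 0.2, 0.35])  # hypothetical sensor feature vector

def classify(features):
    score = float(np.dot(w, features) + b)
    return score, "target" if score > 0 else "non-target"

score, label = classify(x)
print(f"clean input:     score={score:+.3f} -> {label}")

# The logic behind gradient-sign attacks such as FGSM: nudge each feature
# slightly in the direction that most efficiently lowers the score.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

score_adv, label_adv = classify(x_adv)
print(f"perturbed input: score={score_adv:+.3f} -> {label_adv}")
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```

Run in the opposite direction, the same technique can make a civilian object present as a target, which is exactly the scenario an opposing party would need the underlying data to probe.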

The Gulf conflict is generating AI evidence in real time. The legal system has perhaps two to three years before this evidence reaches federal courtrooms in volume. FRE 707 must be ready.

The Cases That Will Test the Framework

Several categories of litigation will bring military AI evidence into American courtrooms.

Veterans' disability claims. Veterans exposed to combat conditions during AI-assisted operations may file disability claims with the VA and, on appeal, in federal court. AI-generated battlefield data (exposure records, threat assessments, environmental monitoring) may be offered as evidence of the conditions that caused the veteran's injury. The VA's adjudicatory framework is not designed for machine-generated evidence, and the Federal Circuit will need to address FRE 707's applicability in this context.

Civilian casualty tort claims. Under the Federal Tort Claims Act and the Alien Tort Statute, families of civilians killed in strikes may bring claims against the United States or defense contractors. AI targeting data will be central evidence: did the system correctly identify the target? Did it account for civilian presence? What was its confidence score? The government's defense will rely on the same AI data the plaintiffs seek to challenge, creating a scenario in which both parties are arguing about the reliability of the same machine-generated evidence.

Defense contractor disputes. Contracts for military AI systems include performance specifications and acceptance criteria. If a system fails to perform as specified in operational conditions, the government may bring breach of contract claims against the contractor. The contractor may argue that the system met specifications under testing conditions and that operational failures were caused by factors outside the system's design parameters. Both sides will offer AI-generated performance data, and FRE 707 will govern its admissibility.

Congressional oversight and war powers. While not strictly litigation, congressional investigations into the use of autonomous weapons will generate legal questions about the admissibility of AI evidence in legislative proceedings. If Congress subpoenas AI targeting logs, the executive branch may assert executive privilege or classification. The resulting litigation over subpoena enforcement will require courts to address the evidentiary status of AI-generated military data.

What Practitioners Should Do Now

Track FRE 707's progress. The Advisory Committee is still revising the proposed rule. Practitioners who anticipate handling military AI evidence should submit comments during the public comment period and track the rule's development through the Judicial Conference.

Build technical expertise. Military AI evidence cases will require expert witnesses who understand both the AI technology and the military operational context. Experts who can bridge the gap between computer science and military operations will be in high demand. Begin identifying and retaining these experts now, before the case pipeline fills.

Prepare for classification fights. If your case involves classified military AI systems, prepare early for the intersection of CIPA, FRE 707, and the state secrets privilege. This is uncharted territory, and the litigation strategy must account for the possibility that key evidence will be available only in classified proceedings or not at all.

Preserve evidence aggressively. Military AI data has uncertain retention periods. Systems are updated, logs are overwritten, and operational data may be classified and transferred to restricted archives. If you are contemplating litigation involving military AI evidence, send preservation demands early and be specific about the types of AI-generated data you need.
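
On the practical side, one way a litigation team might memorialize what was preserved and when is a cryptographic hash manifest of the produced files, so that later alteration or overwriting can be detected. A minimal sketch, assuming a hypothetical directory of preserved logs:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir: str) -> dict:
    """Hash every file under a directory of preserved AI log data."""
    root = Path(evidence_dir)
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "files": {
            str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()
        },
    }

if __name__ == "__main__":
    # "preserved_ai_logs" is a hypothetical directory of produced data.
    manifest = build_manifest("preserved_ai_logs")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```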

The Gulf conflict is generating more AI evidence per day than any prior military operation generated in total. The legal system is not ready for what is coming. The practitioners and experts who prepare now will be the ones who shape the framework for decades to come.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.