Five years ago, most litigators could go an entire career without encountering artificial intelligence in a case. That era is over. AI is now at the center of disputes in employment discrimination, intellectual property, product liability, criminal sentencing, securities fraud, healthcare malpractice, and insurance bad faith. And when AI is at the center of a dispute, you need someone who can explain it.

Not someone who read a blog post about ChatGPT. Not someone who took a weekend course on "AI for lawyers." You need a genuine expert: someone who has built these systems, understands their failure modes, and can translate that knowledge into testimony that a judge and jury will actually follow.

Finding that person is harder than it sounds. The field is new, the pool of qualified experts is small, and the stakes are high. This guide covers everything you need to know.

What Does an AI Expert Witness Actually Do?

An AI expert witness serves the same fundamental role as any expert witness: they help the trier of fact understand technical subject matter that falls outside ordinary knowledge. But AI cases present unique challenges that make this role especially critical.

Most judges did not study machine learning. Most jurors have never trained a neural network. When your case hinges on whether a predictive algorithm exhibited disparate impact, or whether a company's AI system was defectively designed, you need someone who can bridge the gap between the technical reality and the legal questions at hand.

In practice, an AI expert witness might do any of the following:

  • Evaluate an AI system's design and methodology. Was the training data representative? Was the model architecture appropriate for the task? Were there known failure modes that the developer ignored?
  • Analyze outputs for bias or error. Did the system produce systematically different results across protected classes? What was the false positive rate? The false negative rate?
  • Assess industry standards of care. Did the company follow accepted practices for model validation, testing, and deployment? Or did they cut corners?
  • Explain technical concepts to non-technical audiences. This is often the most important skill. An expert who cannot simplify a complex concept without distorting it is not useful to you.
  • Rebut the opposing expert's analysis. AI cases frequently become battles of experts. Your expert needs to identify weaknesses in the other side's methodology and explain them clearly.
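The bias-and-error analysis described above is, at bottom, bookkeeping over per-group confusion-matrix counts. A minimal illustrative sketch in Python (hypothetical data and function names, not any vendor's auditing API):

```python
from collections import defaultdict

def group_error_rates(records):
    """Compute false positive and false negative rates per group.

    Each record is (group, actual, predicted) with boolean labels,
    e.g. whether a screening tool flagged a candidate as unqualified.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, actual, predicted in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # flagged a true negative
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Toy data: group B's false negative rate is far higher than group A's,
# exactly the kind of systematic disparity an expert would probe.
records = [
    ("A", True, True), ("A", True, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, True), ("B", False, False),
]
rates = group_error_rates(records)
# Group A: FNR 0.5, FPR 0.0; Group B: FNR 1.0, FPR 0.5
```

The arithmetic is trivial; the expert's value lies in choosing the right groups, metrics, and data slices, and in explaining why a disparity in these rates matters legally.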

When Do You Need an AI Expert Witness?

The short answer: whenever AI or algorithmic decision-making is a material issue in your case. But let's be more specific.

Employment discrimination cases involving algorithmic hiring tools. This is one of the fastest-growing areas. Companies use AI to screen resumes, conduct video interviews, and rank candidates. When those tools produce disparate outcomes, plaintiffs need experts who can audit the algorithm and identify the source of bias. The EEOC has been increasingly active here; its 2023 settlement in EEOC v. iTutorGroup, over application software that automatically rejected older applicants, signaled that automated screening tools can violate federal anti-discrimination law.
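The usual first screen for disparate impact is the EEOC's four-fifths rule (29 C.F.R. § 1607.4(D)): if any group's selection rate falls below 80% of the highest group's rate, the outcome merits scrutiny. A minimal sketch with hypothetical numbers:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is below 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < 0.8 for g, r in rates.items()}

# Hypothetical screening results for two applicant groups.
flags = four_fifths_check({"A": (48, 100), "B": (30, 100)})
# B's rate (0.30) is 62.5% of A's (0.48), below the 80% threshold
```

The four-fifths rule is only a rough screen, not proof of discrimination; a testifying expert would follow it with statistical significance testing and an analysis of where in the pipeline the disparity arises.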

Criminal cases involving predictive algorithms. The landmark case is State v. Loomis, 881 N.W.2d 749 (Wis. 2016), where the Wisconsin Supreme Court upheld the use of the COMPAS risk assessment tool in sentencing. But the court also noted that the tool's proprietary nature raised due process concerns. Defense attorneys in these cases need experts who can challenge the algorithm's validity, its error rates, and the appropriateness of its use in individual sentencing decisions. In State v. Pickett, 246 A.3d 279 (N.J. App. Div. 2021), the defense successfully won access to the source code of TrueAllele, a probabilistic genotyping program, after expert testimony established that the software's reliability could not be meaningfully assessed without examining how it actually works.

Product liability cases involving autonomous systems. Self-driving cars, medical diagnostic AI, robotic surgery systems. When an autonomous system causes injury, the technical question of what went wrong requires expert analysis. Was it a sensor failure? A training data gap? A design choice that prioritized one outcome over another?

Intellectual property disputes. AI-generated content, AI-assisted inventions, trade secret misappropriation involving machine learning models. These cases require experts who understand both the technical aspects of how models work and the specific ways that training data, model weights, and architectures can constitute (or infringe upon) intellectual property.

Regulatory compliance and enforcement. The EU AI Act, the proposed federal AI frameworks, state-level AI regulations. As the regulatory landscape grows, companies face enforcement actions and private litigation related to AI governance failures. Expert witnesses can opine on whether a company's AI practices met applicable regulatory standards.

Daubert, Frye, and Qualifying Your AI Expert

Before your expert can testify, they need to survive a challenge. In federal courts and the majority of state courts, that means satisfying the Daubert standard. In a handful of states (notably New York, California, and Illinois for some purposes), the older Frye standard still applies.

Under Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), the trial court acts as a gatekeeper, evaluating whether the expert's testimony is based on sufficient facts, reliable principles and methods, and a reliable application of those methods to the case. The court considers factors including whether the theory or technique can be (and has been) tested, whether it has been subjected to peer review, its known error rate, and its general acceptance in the relevant scientific community.

For AI expert witnesses, Daubert challenges raise distinct issues. The field moves fast. Peer-reviewed publications may lag behind industry practice by years. "General acceptance" can be difficult to establish when the relevant community spans computer science, statistics, and domain-specific fields. And the methodology question is especially tricky: an expert analyzing a proprietary AI system may need to explain why their analytical approach is valid even without access to the full source code or training data.

The best AI expert witnesses anticipate Daubert challenges from the moment of engagement. They document their methodology, cite peer-reviewed foundations, and build their analysis to withstand rigorous cross-examination.

Under Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), the standard is narrower: the expert's methodology must be "generally accepted" in the relevant scientific community. For AI cases in Frye jurisdictions, this means your expert needs to demonstrate that their analytical approach reflects mainstream practice in the AI/ML community, not a fringe methodology or a purely novel technique.

In practice, the qualification battle often comes down to credentials and experience. A Daubert challenge is harder to win against an expert who holds a PhD in machine learning, has published in top-tier venues (NeurIPS, ICML, JMLR), has hands-on industry experience building AI systems, and has testified before. It is much easier to challenge someone whose AI expertise consists of a few online courses and a consulting practice that started last year.

Technical AI Experts vs. Legal and Policy AI Experts

Not all AI experts are interchangeable. There is a meaningful distinction between two categories, and choosing the wrong one can undermine your case.

Technical AI experts are practitioners. They have built machine learning models. They understand the mathematics of gradient descent, the architecture of transformer networks, the mechanics of training data curation and model evaluation. They can examine a system's code, analyze its training data, replicate its outputs, and identify where and why it fails. These are the experts you need when the core question is "how does this system work, and did it work correctly?"

Legal and policy AI experts focus on governance, ethics, regulation, and standards of care. They may come from law, public policy, or interdisciplinary AI ethics programs. They can opine on whether a company's AI deployment met industry best practices for transparency, fairness, and accountability. They can speak to regulatory compliance, organizational governance structures, and the evolving landscape of AI law. These are the experts you need when the core question is "should this system have been deployed this way, and did the organization act responsibly?"

Many cases require both. A product liability case involving a defective AI medical device might need a technical expert to explain the system's failure mode and a policy expert to testify about regulatory compliance and industry standards. Retaining both is more expensive, but trying to force one expert to cover both domains often backfires on cross-examination.

The most valuable experts sit at the intersection. They have the technical depth to understand the system and the communication skills and policy awareness to connect their analysis to the legal framework. These experts are rare, and they are in high demand.

What to Look For in an AI Expert Witness

Here is a practical checklist. Not every expert will check every box, but the more they check, the stronger your position.

Hands-on AI/ML Experience

This is non-negotiable. Your expert must have actually built, trained, and deployed machine learning systems. Academic knowledge alone is not enough. An expert who has only studied AI from a theoretical perspective will struggle to analyze real-world systems and will be vulnerable on cross-examination. Look for industry experience at technology companies, research labs, or startups where they worked directly with production AI systems.

Publication Record

Peer-reviewed publications in recognized venues signal credibility. Top-tier conferences in AI include NeurIPS, ICML, ICLR, AAAI, and ACL. Journals like the Journal of Machine Learning Research (JMLR), Artificial Intelligence, and IEEE Transactions on Pattern Analysis and Machine Intelligence carry weight. Publications demonstrate that the expert's work has been vetted by their peers, which directly supports Daubert qualification.

But do not mistake quantity for quality. An expert with three highly cited papers in directly relevant areas is more valuable than one with fifty publications in tangentially related fields.

Prior Testimony Experience

An expert who has testified before knows the courtroom. They understand how to handle cross-examination, how to speak to a jury, and how to navigate the procedural requirements of expert testimony. First-time expert witnesses can be excellent, but they require more preparation. If your case is high-stakes and the timeline is tight, prior testimony experience matters.

Ask for a list of prior cases, including which side retained them, whether they were deposed, and whether they testified at trial. Look for any instances where their testimony was excluded under Daubert. A single exclusion is not necessarily disqualifying (the grounds matter), but a pattern of exclusions is a red flag.

The Ability to Explain Complex Concepts Simply

This is the skill that separates a good expert from a great one. AI is inherently complex. Neural networks, gradient descent, attention mechanisms, loss functions, feature engineering: these are not concepts that most judges and jurors encounter in daily life. Your expert needs to make these concepts accessible without oversimplifying them to the point of inaccuracy.

Test this during your initial consultation. Ask them to explain a key concept in your case as if they were talking to a smart non-technical person. If they drown you in jargon, they will drown the jury too. If they use clear analogies and build understanding step by step, you have found someone who can actually move the needle at trial.

Independence and Credibility

The best expert witnesses are not advocates. They are educators. They call it like they see it, even when that means delivering bad news to the retaining party. An expert who has a reputation for always supporting whichever side hired them will be exposed on cross-examination. Look for someone who has credibility with both plaintiffs and defendants, and who is willing to tell you early if the technical facts do not support your position.

Common Engagement Types

AI expert witness engagements vary widely in scope and cost. Understanding the different types will help you budget and plan effectively.

Consulting Expert

A consulting expert advises the legal team but does not testify. Their work is typically protected as attorney work product. This is the most common starting point: you bring in an AI expert to help you understand the technology, evaluate the strengths and weaknesses of your case, develop your technical theory, and prepare for depositions. Many engagements stay at this level if the case settles before trial.

Consulting engagements are also useful for pre-filing case evaluation. Before committing to litigation, you can retain an expert to assess whether the AI system at issue actually has the defect or bias your client claims. This can save significant resources if the technical analysis does not support the case.

Report Writing

When the case moves toward trial, the expert typically prepares a written report under Federal Rule of Civil Procedure 26(a)(2)(B) (or the state equivalent). The report must include a complete statement of all opinions, the basis and reasons for them, the facts and data considered, any exhibits, the expert's qualifications, a list of all publications authored in the preceding ten years, a list of all other cases in which the expert testified (at trial or by deposition) during the preceding four years, and a statement of compensation.

AI expert reports are often highly technical. They may include statistical analyses, code reviews, model evaluations, and visualizations of model behavior. The report needs to be both scientifically rigorous and accessible to the court. This is a difficult balance, and it takes time. Budget accordingly.

Deposition Support

Your expert may be deposed by opposing counsel. Deposition preparation is critical. The expert needs to be comfortable with their report, prepared for technical cross-examination, and aware of the common traps that opposing counsel uses with expert witnesses. If your expert has never been deposed, invest extra time in preparation.

Your expert can also help you prepare to depose the opposing party's AI expert. They can identify technical weaknesses, suggest lines of questioning, and help you understand the opposing expert's report at a technical level that you could not reach on your own.

Trial Testimony

Trial testimony is the highest-stakes engagement. Your expert takes the stand, presents their opinions, and faces cross-examination. In AI cases, this often involves the use of demonstrative exhibits: diagrams of model architecture, visualizations of data distributions, step-by-step explanations of how the algorithm processes inputs and produces outputs. The best AI expert witnesses are comfortable with technology in the courtroom and can use visual aids effectively.

Preparation for trial testimony typically involves multiple sessions with the legal team, mock examinations, and iterative refinement of demonstrative materials. The investment is significant, but it pays dividends when the expert delivers clear, compelling, and unshakable testimony.

Lessons from the Case Law

A few cases illustrate the stakes involved in AI expert testimony.

In State v. Loomis, 881 N.W.2d 749 (Wis. 2016), the defendant challenged the use of the COMPAS recidivism prediction tool in his sentencing. The Wisconsin Supreme Court upheld the use of COMPAS but imposed significant limitations, ruling that risk scores could not be used to determine whether the offender is incarcerated or to determine the severity of the sentence. The case turned in part on expert testimony about the algorithm's methodology and limitations. The defense argued that COMPAS's proprietary nature made it impossible to meaningfully challenge, a point that resonated with the court even as it ultimately upheld the sentence.

In State v. Pickett, 246 A.3d 279 (N.J. App. Div. 2021), the defense challenged TrueAllele, a probabilistic genotyping program used to interpret complex DNA mixtures. Defense experts testified that the software's reliability could not be meaningfully evaluated without examining its source code, and the court held that the developer's trade secret interest did not override the defendant's right to scrutinize the evidence against him, ordering access under a protective order. The case highlighted the importance of having an expert who can evaluate not just an AI system's outputs but its fundamental design assumptions, and who can articulate to the court exactly why access to the system's internals matters.

In Houston Federation of Teachers v. Houston Independent School District, the plaintiffs challenged EVAAS, a value-added statistical model used to evaluate teachers and make termination decisions. The case, which settled in 2017 after the district court allowed the teachers' due process claim to proceed, centered on expert testimony that the model's scores were unreliable, non-reproducible, and influenced by factors outside teachers' control. The expert analysis demonstrated that the same teacher could receive dramatically different scores depending on which students were assigned to their classroom, undermining the model's use in high-stakes employment decisions.
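The instability at issue can be illustrated with a toy simulation (illustrative only; EVAAS's actual model is proprietary and far more elaborate). Score a hypothetical teacher who has no effect at all on student outcomes by the average test-score gain of a randomly assigned class, repeat the assignment, and watch the "score" swing:

```python
import random
import statistics

random.seed(0)

# A hypothetical district of students, each with a true test-score gain
# that has nothing to do with any particular teacher.
students = [random.gauss(mu=0.0, sigma=10.0) for _ in range(2000)]

# Re-run the school year 50 times: each time, randomly assign this
# teacher a class of 25 students and score them by average gain.
teacher_scores = [
    statistics.mean(random.sample(students, 25)) for _ in range(50)
]

# The same (null-effect) teacher's score swings purely as a function
# of which students happened to land in the classroom.
spread = max(teacher_scores) - min(teacher_scores)
```

With class-sized samples, the standard error of the mean is large relative to any plausible teacher effect, which is the statistical core of the experts' non-reproducibility argument.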

These cases share a common theme: the quality of expert testimony directly shaped the outcome. In each case, the ability (or inability) to clearly explain the AI system's mechanics, limitations, and potential for error was the decisive factor.

Red Flags: When to Walk Away

Not every AI expert is worth retaining. Watch for these warning signs:

  • No hands-on technical experience. If their AI knowledge comes entirely from reading papers, attending conferences, or consulting, they will struggle under Daubert challenge and on cross-examination.
  • Willingness to reach any conclusion you want. An expert who asks what opinion you need before reviewing the evidence is a hired gun, and opposing counsel will make the jury see that.
  • Inability to explain their own methodology. If they cannot clearly articulate how they arrived at their conclusions, they will not be able to defend those conclusions under questioning.
  • Resume padding. Watch for inflated claims: listing "AI experience" that is actually basic data analytics, claiming authorship on publications where they were one of fifty co-authors, or overstating their role in notable projects.
  • Excessive advocacy. The expert's job is to educate, not to argue your case. If they sound like an advocate during your first conversation, they will sound like one on the stand, and that destroys credibility.

How to Structure the Engagement

A few practical tips on the business side of retaining an AI expert witness.

Start early. AI cases require significant technical analysis. If you wait until the expert disclosure deadline is approaching, you will either rush the analysis (producing a weaker report) or miss the deadline entirely. Engage your expert as a consultant early in the case, even before you know whether you will need them to testify.

Define the scope clearly. AI expert engagements can expand rapidly. The expert may want to analyze the full training dataset, replicate the model, conduct independent testing, and review all related documentation. This is thorough, but it is also expensive. Work with your expert to define a scope that is sufficient to support their opinions without bankrupting your client.

Ensure access to data and systems. AI expert analysis often requires access to the AI system itself, its training data, its source code, and its outputs. If the opposing party controls the system, you may need to negotiate access through discovery. Build this into your discovery plan early.

Budget for iteration. The expert's initial analysis may reveal unexpected findings that require additional investigation. Build flexibility into your budget and timeline. The best expert reports are the product of iterative analysis and revision, not a single pass.

Coordinate with other experts. In complex cases, your AI expert may need to work with other experts (statisticians, domain specialists, damages experts). Facilitate communication between them early to ensure consistency in their analyses and opinions.

The Bottom Line

Hiring the right AI expert witness is one of the most consequential decisions you will make in an AI-related case. The technology is complex, the legal frameworks are still evolving, and the stakes for your client are real. A strong AI expert does more than just testify. They shape your understanding of the case from the outset, help you build a coherent technical narrative, and deliver that narrative persuasively at trial.

The wrong expert, someone without genuine technical depth, without courtroom experience, or without the ability to communicate clearly, can undermine months of legal work in a single afternoon on the stand.

Take the time to find the right person. Vet their credentials rigorously. Test their communication skills. And engage them early enough to do the job properly.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. Our team combines deep technical expertise with real-world courtroom experience, helping attorneys navigate the technical complexities of AI cases from initial case evaluation through trial testimony. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.