After serving as an expert witness in over a dozen AI-related matters in the past eighteen months, I can say with confidence: 2025 is the year AI litigation moved from speculative to structural. The cases being filed today aren't exploratory—they're building the doctrinal framework that will govern AI accountability for decades. Attorneys who aren't tracking these developments are already behind.

This article examines the most consequential AI litigation trends shaping the field right now, and what they mean for practitioners on both sides of these disputes.

The Copyright Wars: New York Times v. OpenAI and Its Progeny

The New York Times Co. v. Microsoft Corp. and OpenAI case remains the gravitational center of AI copyright litigation. Filed in December 2023, it has since survived multiple motions to dismiss and is now deep into discovery—a phase that has proven technically explosive. The core question—whether training a large language model on copyrighted material constitutes fair use—has no clean precedent, and the Southern District of New York is navigating genuinely novel territory.

What makes this case technically fascinating from an expert witness perspective is the discovery battle over training data composition. OpenAI has resisted full disclosure of its training datasets, arguing trade secret protections. The Times has countered that without knowing what the model ingested, no fair use analysis is possible. The court's forthcoming rulings on this discovery dispute will have implications far beyond this single case—they will determine whether AI companies can maintain opacity about their training data in litigation.

Meanwhile, the Thomson Reuters v. ROSS Intelligence matter produced a summary judgment ruling in the District of Delaware holding that ROSS's use of Westlaw content to build a legal AI research tool was not fair use. While the case involved a relatively narrow, non-generative AI application, the reasoning is being cited in briefs across the copyright-AI landscape. The ruling signaled that courts may be skeptical of the "transformative use" argument when the AI's output competes directly with the source material's market.

The technical question at the heart of every AI copyright case is deceptively simple: does a model "memorize" or does it "learn"? The answer, as any machine learning engineer will tell you, is both—and that ambiguity is what makes these cases so difficult to resolve.
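To make that concrete: one probe experts run in these matters is a regurgitation test, which prompts the model with the opening of a known copyrighted passage and measures how much of the true continuation comes back verbatim. The Python sketch below is a minimal, hypothetical version; the generate function is a placeholder for whatever model interface is at issue and is not drawn from any particular case.

# Minimal regurgitation probe: how much of a known passage does the model
# reproduce verbatim when given only its opening? Illustrative only.
from difflib import SequenceMatcher

def verbatim_overlap(completion: str, continuation: str) -> float:
    """Fraction of the true continuation reproduced verbatim by the model."""
    matcher = SequenceMatcher(None, completion, continuation)
    match = matcher.find_longest_match(0, len(completion), 0, len(continuation))
    return match.size / max(len(continuation), 1)

def regurgitation_score(generate, passage: str, prompt_chars: int = 200) -> float:
    """Split a known passage into a prompt and its continuation, query the
    model, and score the verbatim overlap. `generate` stands in for whatever
    API or local inference call the system under review exposes."""
    prompt, continuation = passage[:prompt_chars], passage[prompt_chars:]
    completion = generate(prompt)
    return verbatim_overlap(completion, continuation)

Scores near 1.0 across many passages look like memorization; scores near zero look more like generalization. Real disputes, of course, live in between.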

AI Product Liability: The Emerging Frontier

While copyright captures headlines, the more doctrinally significant development in 2025 is the emergence of AI product liability as a coherent cause of action. Several cases filed in the past year are testing whether traditional product liability frameworks—design defect, manufacturing defect, failure to warn—can accommodate AI systems that learn, adapt, and behave probabilistically.

The most closely watched is the growing body of litigation around AI-assisted medical diagnosis tools. At least three cases currently in federal court involve patients who allege they were harmed by diagnostic AI systems that either missed critical findings or generated false positives that led to unnecessary procedures. The legal challenge is establishing the standard of care: when an AI tool is FDA-cleared and used within its intended parameters, does a failure constitute a product defect or a clinical judgment error?

As an expert witness, I've been asked to opine on the technical architecture of these systems in ways that directly map to legal elements. Was the training data representative of the patient population being served? Were known failure modes documented and communicated to clinicians? Did the system's confidence calibration meet industry standards? These are technical questions with direct legal consequences, and they require expert testimony that bridges both domains.
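Confidence calibration is a good example of a property that can be measured directly from a system's logged predictions. The sketch below is a minimal expected calibration error (ECE) check, a standard metric in the machine learning literature; the bin count and the data it would run over are illustrative assumptions, not a clinical or regulatory threshold.

# Minimal expected calibration error (ECE) check over logged predictions.
# Inputs are illustrative: per-case model confidences and 1/0 correctness flags.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Average gap between stated confidence and observed accuracy, weighted
    by how many predictions fall in each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

A well-calibrated diagnostic model's 0.9-confidence findings should be correct roughly 90 percent of the time; a large ECE is the quantitative signature of the over- or under-confidence that expert reports in these cases often turn on.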

Autonomous Vehicle Litigation Matures

The autonomous vehicle space continues to generate significant litigation, but the nature of the claims has evolved. Early cases focused on spectacular failures—the Uber ATG fatality in Tempe, the Cruise incidents in San Francisco. The current wave is more nuanced: cases involving Level 2 and Level 3 systems where the vehicle's automation and the driver's attention intersect in ambiguous ways. The central expert witness question in these matters is often one of human factors engineering: did the system's interface design create foreseeable confusion about the allocation of driving responsibility?

General Motors' decision to wind down Cruise's robotaxi operations in late 2024, following a series of incidents and regulatory actions, has also spawned shareholder derivative actions alleging that the company's board failed to adequately oversee AI safety risks. This represents an important new vector: AI governance failures as corporate liability.

Algorithmic Bias and Employment Discrimination

The EEOC's continued focus on algorithmic hiring tools has moved from guidance to enforcement. Several consent decrees in 2024 and early 2025 have established that employers bear responsibility for discriminatory outcomes produced by third-party AI hiring platforms, even when the employer did not design or train the algorithm. The legal theory—that the employer is the "user" of a tool that produces disparate impact—is well-established under Title VII, but its application to AI systems raises novel questions about what constitutes adequate validation and monitoring.

From a technical standpoint, these cases turn on whether the AI vendor conducted appropriate adverse impact analyses before deployment, and whether the employer independently validated the tool against its own applicant population. I've reviewed several of these systems in litigation, and the gap between what vendors claim in their marketing materials and what their technical documentation actually supports is, frankly, striking.
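The baseline screen here is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: compare each group's selection rate to the most-favored group's, and treat a ratio below 0.8 as a flag for potential adverse impact. A minimal sketch, using purely hypothetical numbers, looks like this:

# Four-fifths (80%) rule screen: each group's selection rate relative to the
# most-favored group. Numbers below are hypothetical, for illustration only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # selection rate of the most-favored group
    return {group: rate / benchmark for group, rate in rates.items()}

example = {"group_a": (50, 100), "group_b": (28, 100)}
print(impact_ratios(example))  # group_b ratio = 0.56, well below the 0.8 flag

The ratio is a screening heuristic rather than a legal conclusion, but it is typically the first computation a reviewing expert looks for in a vendor's validation documentation.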

The EU AI Act: Extraterritorial Implications for U.S. Litigation

The EU AI Act, which entered into force in August 2024 and whose obligations take effect in stages through 2027, is already influencing U.S. litigation strategy in ways that many practitioners haven't fully appreciated. The Act's risk classification framework—particularly its designation of certain AI applications as "high-risk," subject to mandatory conformity assessments—is being cited in U.S. complaints as evidence of an emerging international standard of care.

This is a significant development for expert witnesses. When a plaintiff's counsel argues that a defendant's AI system would be classified as "high-risk" under the EU AI Act and that the defendant failed to conduct the assessments that European law would require, they are effectively asking the court to consider an international regulatory framework as relevant to the domestic standard of care. Whether courts accept this framing will be one of the defining questions of AI litigation in the coming years.

What This Means for Practitioners

For attorneys engaging with AI litigation—whether on the plaintiff or defense side—several practical imperatives emerge from the current landscape:

First, technical discovery is the new battleground. The most consequential motions in AI cases are increasingly about access to training data, model architecture documentation, validation studies, and internal testing records. Attorneys who don't know what to ask for—or how to evaluate what they receive—are at a structural disadvantage.

Second, expert witness selection is critical and must happen early. AI systems are not intuitive. A model's behavior cannot be understood from its outputs alone. Retaining a qualified expert who can review the system's architecture, training methodology, and deployment context is not a luxury—it's a prerequisite for competent representation.

Third, the standard of care is being written now. Every case that settles, every motion that's decided, every expert report that's filed is contributing to the emerging consensus about what constitutes reasonable AI development and deployment. The attorneys and experts shaping these early cases are, in a very real sense, writing the rules.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at criterion@thecriterionai.com or call (617) 798-9715.