Access the Litigation Dashboard

Track AI litigation trends, Daubert rulings, and regulatory developments. Free for legal professionals.


Litigation Intelligence

AI Litigation Dashboard

Tracking the cases, rulings, and regulatory developments shaping the intersection of artificial intelligence and the law.

AI Legal Pulse — Powered by Grok

Intelligence briefing on AI litigation sentiment, regulatory developments, and trending legal discourse. Updated daily at 6:00 AM ET.

AI Litigation Sentiment

[Interactive panels: sentiment gauge (Bearish on AI / Neutral / Bullish on AI), Week-over-Week change, and a Trending on 𝕏 — AI Litigation feed.]
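
For illustration only, the sketch below shows one way a gauge like this could be computed: average hypothetical per-post sentiment scores in [-1, 1], bucket the weekly average into the three gauge labels, and report the week-over-week change. The thresholds, function names, and sample scores are assumptions for the example, not the dashboard's actual pipeline.

```python
# Illustrative sketch only -- thresholds and inputs are hypothetical,
# not the dashboard's actual sentiment pipeline.
from statistics import mean

def bucket(score: float) -> str:
    """Map an average sentiment score in [-1, 1] to a gauge label."""
    if score <= -0.2:
        return "Bearish on AI"
    if score >= 0.2:
        return "Bullish on AI"
    return "Neutral"

def weekly_pulse(this_week: list[float], last_week: list[float]) -> dict:
    """Return the gauge label, current score, and week-over-week change."""
    current, prior = mean(this_week), mean(last_week)
    return {
        "label": bucket(current),
        "score": round(current, 2),
        "week_over_week": round(current - prior, 2),
    }

print(weekly_pulse([0.1, 0.4, -0.2, 0.3], [-0.1, 0.0, 0.2]))
```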


AI Litigation by the Numbers

A data-driven overview of AI litigation trends drawn from court filings, the Stanford HAI AI Index, the McKool Smith AI Litigation Tracker, and academic research.

[Charts: AI-Related Cases Filed Per Year; Case Type Distribution; Daubert Challenge Outcomes (AI Evidence); Top Jurisdictions for AI Litigation]

Data Note: Case filing estimates drawn from Stanford HAI AI Index (2024), McKool Smith AI Litigation Tracker (50+ pending federal cases as of 2025), WilmerHale Year in Review (2024), and Copyright Alliance analysis. Daubert outcome data reflects published federal rulings involving AI or algorithmic evidence. Jurisdiction data based on district-level filing concentration.
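
As a rough illustration of how figures like those above could be tallied from a case list, the sketch below counts filings per year, per case type, and per district from a CSV of case records. The file name and column names are hypothetical assumptions, not the dashboard's actual dataset or schema.

```python
# Illustrative sketch only -- "ai_cases.csv" and its columns
# (filed_year, district, case_type) are hypothetical.
import csv
from collections import Counter

def tally_filings(path: str = "ai_cases.csv"):
    """Count AI-related case filings by year, case type, and district."""
    per_year, per_type, per_district = Counter(), Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            per_year[row["filed_year"]] += 1
            per_type[row["case_type"]] += 1
            per_district[row["district"]] += 1
    return per_year, per_type, per_district.most_common(5)

# Example usage (requires a CSV with the assumed columns):
# per_year, per_type, top_districts = tally_filings()
```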

Active and Landmark Litigation

Key federal and state cases, along with agency enforcement actions, defining AI liability, intellectual property rights, employment discrimination, privacy, and criminal justice. All cases are verified through court records and legal databases.

Case | Court | Filed | Type | Status | Description
The New York Times Co. v. Microsoft Corp. et al. | S.D.N.Y. | Dec. 2023 | Copyright | Active | The Times alleges OpenAI and Microsoft used millions of its articles to train GPT models without authorization. Claims copyright infringement, unfair competition, and trademark dilution. Consolidated with related cases in an MDL before Judge Stein.
Authors Guild v. OpenAI Inc. | S.D.N.Y. | Sep. 2023 | Copyright | Active | Class action by prominent authors (John Grisham, Jodi Picoult, George R.R. Martin, et al.) alleging systematic copyright infringement through LLM training. Consolidated in the MDL before Judge Stein in S.D.N.Y.
Doe v. GitHub, Inc. et al. | N.D. Cal. | Nov. 2022 | Copyright | Active | Class action alleging GitHub Copilot (built on OpenAI Codex) reproduces open-source code without attribution. Claims violations of the DMCA, license terms, and privacy rights. Partially survived motions to dismiss.
Andersen v. Stability AI Ltd. et al. | N.D. Cal. | Jan. 2023 | Copyright | Active | Visual artists challenge Stability AI, Midjourney, and DeviantArt for training image generators on copyrighted artwork without consent. Key test of fair use for generative AI visual outputs.
Getty Images (US), Inc. v. Stability AI, Inc. | D. Del. | Feb. 2023 | Copyright | Active | Getty alleges Stability AI copied over 12 million images from its library to train Stable Diffusion. Claims copyright and trademark infringement. A parallel UK proceeding is also pending.
Tremblay v. OpenAI, Inc. | N.D. Cal. | Jun. 2023 | Copyright | Active | Authors Paul Tremblay and Mona Awad allege ChatGPT was trained on pirated copies of their books obtained from shadow-library datasets. Tests the scope of training data liability.
Chabon v. Meta Platforms, Inc. | N.D. Cal. | Sep. 2023 | Copyright | Active | Michael Chabon and other authors allege Meta trained LLaMA models on pirated book datasets. The court ruled largely for Meta in June 2025, finding the training use may constitute fair use.
Thomson Reuters v. ROSS Intelligence Inc. | D. Del. | May 2020 | Copyright | Decided | Thomson Reuters alleged ROSS Intelligence used Westlaw content to train its AI legal research platform. In February 2025, the court granted summary judgment finding ROSS liable for copyright infringement and rejecting its fair use defense -- the first major ruling on AI training-data copyright liability.
UMG Recordings v. Suno, Inc. | D. Mass. | Jun. 2024 | Copyright | Active | RIAA-backed action by major record labels alleging AI music generator Suno was trained on copyrighted sound recordings. Suno asserts a fair use defense. First case targeting AI-generated music at scale.
UMG Recordings v. Uncharted Labs (Udio) | S.D.N.Y. | Jun. 2024 | Copyright | Active | Companion RIAA action to the Suno case, targeting AI music generation service Udio for mass infringement of copyrighted sound recordings through AI training.
Concord Music Group v. Anthropic PBC | M.D. Tenn. | Oct. 2023 | Copyright | Active | Music publishers allege the Claude chatbot reproduces copyrighted song lyrics. Court denied Anthropic's motion to dismiss contributory and vicarious infringement claims (Oct. 2025). Additional publishers joined in Jan. 2026.
Mata v. Avianca, Inc. | S.D.N.Y. | Feb. 2022 | Consumer Prot. | Sanctions | Attorneys sanctioned after submitting a legal brief containing fabricated case citations generated by ChatGPT. Judge Castel imposed $5,000 in sanctions. Became a watershed moment for AI reliability in legal practice.
Mobley v. Workday, Inc. | N.D. Cal. | Feb. 2023 | Employment | Active | Putative class action alleging Workday's AI-powered screening tools discriminate based on race, age, and disability. Court held AI vendors can be liable as employer "agents" (Jul. 2024). ADEA collective conditionally certified (May 2025).
EEOC v. iTutorGroup / Wanlida | E.D.N.Y. | May 2022 | Employment | Settled | EEOC alleged the tutoring company programmed its AI recruitment software to automatically reject female applicants over 55 and male applicants over 60. Settled for $365,000 in August 2023. First EEOC enforcement action against AI hiring discrimination.
FTC v. Rite Aid Corp. | FTC Administrative | Dec. 2023 | Privacy | Decided | FTC alleged Rite Aid deployed facial recognition surveillance that disproportionately produced false matches for women and people of color. Rite Aid is banned from using facial recognition for five years under a consent order.
Gonzalez v. Google LLC | U.S. Supreme Court | Oct. 2022 | Section 230 | Decided | Family of a Paris terror attack victim alleged YouTube's AI recommendation algorithm promoted ISIS content. The Supreme Court declined to address Section 230's application to recommendation algorithms (May 2023), leaving algorithmic liability questions unresolved.
Williams v. City of Detroit | Wayne County Circuit | Apr. 2021 | Privacy | Active | Robert Williams was wrongfully arrested based on a flawed facial recognition match. First known case of a wrongful arrest caused by facial recognition technology. Brought national attention to algorithmic bias in policing.
State v. Loomis | Wis. Supreme Court | Jul. 2016 | Criminal Justice | Decided | Landmark challenge to the COMPAS algorithmic risk assessment tool in sentencing. The Wisconsin Supreme Court upheld its use provided it is not the determinative factor in sentencing and its limitations are disclosed. Cert. denied by SCOTUS.
Methodology: Cases selected based on precedential significance, active status, and coverage across AI litigation categories. Data verified through PACER, court dockets, and legal news sources including McKool Smith AI Litigation Tracker, Copyright Alliance, and Reuters Legal. Last updated February 2026.

Key Admissibility Decisions

Significant Daubert and Frye rulings, along with related evidence-rule developments, addressing the reliability and admissibility of AI-generated or algorithmic evidence in federal and state courts.

Case | Court | Year | AI/Algorithm Type | Outcome | Significance
Daubert v. Merrell Dow Pharmaceuticals | U.S. Supreme Court | 1993 | Statistical | Foundation | Established the four-factor reliability test (testability, peer review, error rate, general acceptance) now applied to AI evidence. 509 U.S. 579.
Kumho Tire Co. v. Carmichael | U.S. Supreme Court | 1999 | Technical | Extended | Extended Daubert to all expert testimony, including technical and specialized knowledge, broadening its applicability to AI and algorithmic systems. 526 U.S. 137.
State v. Loomis | Wis. Supreme Court | 2016 | COMPAS Risk Score | Admitted w/ Limits | Upheld use of the COMPAS algorithmic risk assessment in sentencing but required disclosure of its limitations. Cannot be the sole determinative factor. 881 N.W.2d 749.
Houston Fed. of Teachers v. Houston ISD | S.D. Tex. | 2017 | EVAAS Algorithm | Challenged | Teachers challenged a proprietary "value-added" algorithm used for termination decisions. Court found due process concerns with opaque algorithmic decision-making in employment.
United States v. Gissantaner | 6th Circuit | 2021 | Probabilistic Genotyping | Admitted | Sixth Circuit found STRmix probabilistic genotyping software reliable under Daubert, based on scientific testing and peer review. Key precedent for algorithmic forensic evidence.
Mata v. Avianca, Inc. | S.D.N.Y. | 2023 | LLM Output | Sanctions | While not a formal Daubert ruling, Judge Castel's sanctions order addressed the reliability of LLM-generated legal research, establishing that AI outputs require independent verification.
Thomson Reuters v. ROSS Intelligence | D. Del. | 2025 | AI Legal Research | Infringement Found | First major ruling on AI training-data copyright liability. The court examined the technical methodology of AI training, informing evidentiary standards for demonstrating AI system capabilities and training data use.
Napoleon v. State (BulletProof PG) | Okla. Crim. App. | 2026 | Probabilistic Genotyping | Admitted | Oklahoma Court of Criminal Appeals held BulletProof probabilistic genotyping software met the Daubert factors. Joins a growing split among courts on standards for algorithmic forensic evidence.
FRE 702 Amendment (2023) | All Federal Courts | 2023 | All Expert/AI | Rule Change | The December 2023 amendment requires experts to affirmatively demonstrate (not merely assert) reliable application of methodology to the facts. Directly impacts AI evidence gatekeeping.
Proposed FRE 707 | Advisory Committee | 2025 | Machine-Generated | Pending | Proposed new rule would apply Rule 702 reliability standards to machine-generated evidence offered without an accompanying expert witness. Approved for public comment by the Judicial Conference; comment period open.

Machine-Generated Evidence: Regulatory Timeline

Tracking the development of Proposed FRE 707, which would be the first federal evidence rule specifically designed to address AI and machine-generated evidence in court proceedings.

December 2023
FRE 702 Amendment Takes Effect
Amended Rule 702 requires expert witnesses to affirmatively demonstrate reliable application of their methodology. Raises the bar for AI-related expert testimony and catalyzes discussions about a dedicated AI evidence rule.
Spring 2024
Advisory Committee Begins AI Evidence Study
The Advisory Committee on Evidence Rules initiates a formal study of AI and machine-learning evidence challenges. Considers whether existing rules adequately address machine-generated outputs offered without expert testimony.
November 2024
Advisory Committee Agrees to Develop New Rule
At its fall meeting, the Advisory Committee agrees to develop a formal proposal for a new Rule 707 that would require federal courts to apply Rule 702 standards to machine-generated evidence, even when no human expert testifies.
May 2, 2025
Judicial Conference Publishes Draft Rule 707
The U.S. Judicial Conference's Standing Committee on Rules of Practice and Procedure publishes Proposed Rule 707 -- "Machine-Generated Evidence." Also proposes amendments to Rule 901(b)(10) for authenticating AI-generated content.
June 2025
Judicial Conference Approves for Public Comment
The Judicial Conference formally approves Proposed Rule 707 for public comment. The rule would require proponents to demonstrate sufficient data inputs, reliable principles and methods, and valid outputs -- mirroring the Rule 702 framework.
August 16, 2025
Public Comment Period Opens
The Committee on Rules of Practice and Procedure issues the draft for formal public comment alongside amendments to 10 other rules across appellate, bankruptcy, civil, criminal, and evidence categories.
February 2026 (Current)
Public Comment Period Ongoing
Comments continue to arrive from bar associations, technology companies, legal academics, and practitioners. Key debates include the scope of "machine-generated evidence," burden allocation, and the interplay with the existing Daubert framework.
2026 -- 2027 (Projected)
Final Adoption and Effective Date
Following public comment, the Advisory Committee will revise the proposed rule. If approved by the Judicial Conference and the Supreme Court, and absent contrary action by Congress, the rule could take effect December 1, 2027, at the earliest.
Sources: Advisory Committee on Evidence Rules meeting minutes, Judicial Conference publications, National Law Review, Nelson Mullins analysis, Quinn Emanuel analysis, Barnes & Thornburg alerts, NYU Compliance & Enforcement blog. Key reference: Proposed Rule 707 -- Machine-Generated Evidence, Standing Committee on Rules of Practice and Procedure (May 2025).

Need Expert Analysis for Your AI Case?

The Criterion AI provides expert witness services, AI system audits, and litigation consulting for complex technology disputes.

Retain an Expert