The Criterion AI Blog

Expert perspectives on AI litigation, algorithmic accountability, and the evolving legal landscape of artificial intelligence.

February 2026 · AI Industry

The AI Whistleblower Wave: When Engineers Know the Model Is Broken

AI engineers at major companies are discovering their models are defective, biased, or dangerous. The coming wave of whistleblower litigation, spanning products liability, securities fraud, and retaliation claims, will reshape the industry.

Read Article →
February 2026 · AI Discrimination

Religious Discrimination by Algorithm: When AI Systems Cannot Handle Faith-Based Edge Cases

From hiring algorithms that penalize resume gaps taken for Hajj to facial recognition systems that fail on hijabs and turbans, AI is generating a new category of religious discrimination claims at the intersection of algorithmic bias and religious liberty law.

Read Article →
February 2026 · Healthcare AI

The AI Medical Scribe Problem: When Clinical Documentation AI Gets It Wrong

Hospital AI scribes like Nuance DAX and Abridge are documenting millions of patient encounters. When they misrecord a diagnosis or omit a critical finding, the malpractice implications are profound and the standard of care is still forming.

Read Article →
February 2026 · AI Liability

AI Agents With Wallets: Liability When Autonomous Systems Execute Financial Transactions

AI agents are making purchases, executing trades, and entering contracts with real money. Agency law was never designed for agents that lack legal personhood, and the resulting liability questions are genuinely novel.

Read Article →
February 2026 · AI Liability

Agentic AI and the Liability Gap: When Autonomous Systems Act Without Permission

As AI systems gain the ability to take independent actions, the gap between what these systems can do and what the law is prepared to handle is widening rapidly. An expert analysis of the emerging liability framework.

Read Article →
February 2026 · AI Litigation

AI Hallucinations in the Courtroom: Who's Liable When LLMs Fabricate?

Large language models generate convincing but fabricated content with alarming regularity. When hallucinated outputs cause real harm, the question of liability implicates developers, deployers, and the users who relied on the output.

Read Article →
February 2026 · AI Regulation

The EU AI Act Is Here: What US Companies and Their Lawyers Need to Know

The EU AI Act is the most comprehensive AI regulation in the world, and its extraterritorial reach means US companies cannot ignore it. A practical guide to compliance obligations, risk classification, and enforcement.

Read Article →
February 2026 · Employment AI

The Algorithmic Hiring Trap: How AI Screening Tools Discriminate

AI hiring tools promise efficiency and objectivity. In practice, they often replicate and amplify the biases present in historical hiring data, creating new forms of employment discrimination at scale.

Read Article →
February 2025 · Digital Forensics

Synthetic Evidence: When AI-Generated Photos, Audio, and Video Enter Litigation

Deepfake technology has reached the point where fabricated evidence can survive casual scrutiny. For the legal system, which depends on the authenticity of evidence, this represents an existential challenge requiring new forensic approaches.

Read Article →
February 2025 · Health Insurance AI

When AI Denies Your Health Insurance Claim: The Legal Reckoning

Inside the algorithms that deny prior authorization requests at scale, the lawsuits challenging them, and how expert witnesses can prove algorithmic harm in court.

Read Article →
February 2025 · AI Litigation

The AI Liability Landscape in 2025: What Every Litigator Needs to Know

From the New York Times v. OpenAI copyright battle to the explosion of AI product liability claims, 2025 has become a watershed year for AI litigation. An expert witness perspective on the cases reshaping the field.

Read Article →