Insights & Analysis
Expert perspectives on AI litigation, algorithmic accountability, and the evolving legal landscape of artificial intelligence.
AI engineers at major companies are discovering their models are defective, biased, or dangerous. The coming wave of whistleblower litigation, spanning products liability, securities fraud, and retaliation claims, will reshape the industry.
From hiring algorithms that penalize resume gaps for Hajj to facial recognition that fails on hijabs and turbans, AI systems are creating a new category of religious discrimination claims at the intersection of AI bias and religious liberty law.
Hospital AI scribes like Nuance DAX and Abridge are documenting millions of patient encounters. When they misrecord a diagnosis or omit a critical finding, the malpractice implications are profound, and the standard of care is still forming.
AI agents are making purchases, executing trades, and entering contracts with real money. Agency law was never designed for principals whose agents have no legal personhood, and the liability questions are genuinely novel.
As AI systems gain the ability to take independent actions, the gap between what these systems can do and what the law is prepared to handle is widening rapidly. An expert analysis of the emerging liability framework.
Large language models generate convincing but fabricated content with alarming regularity. When hallucinated outputs cause real harm, the question of liability implicates developers, deployers, and the users who relied on the output.
The EU AI Act is the most comprehensive AI regulation in the world, and its extraterritorial reach means US companies cannot ignore it. A practical guide to compliance obligations, risk classification, and enforcement.
AI hiring tools promise efficiency and objectivity. In practice, they often replicate and amplify the biases present in historical hiring data, creating new forms of employment discrimination at scale.
Deepfake technology has reached the point where fabricated evidence can survive casual scrutiny. For the legal system, which depends on the authenticity of evidence, this represents an existential challenge requiring new forensic approaches.
Inside the algorithms that deny prior authorization requests at scale, the lawsuits challenging them, and how expert witnesses can prove algorithmic harm in court.
From the New York Times v. OpenAI copyright battle to the explosion of AI product liability claims, 2025 has become a watershed year for AI litigation. An expert witness perspective on the cases reshaping the field.