On August 1, 2024, the EU AI Act officially entered into force, making it the first comprehensive legal framework for artificial intelligence anywhere in the world. If you are a US-based company that sells products or services to European customers, uses AI systems that affect people in the EU, or provides AI models that European companies integrate into their products, this law applies to you. Full stop.
I have spent the past year advising companies and their legal teams on what the EU AI Act actually requires from a technical standpoint. The gap between what most US companies think the law demands and what it actually mandates is significant. That gap represents both compliance risk and litigation exposure.
The Enforcement Timeline
The EU AI Act rolls out in phases, and understanding the timeline is critical for compliance planning.
February 2, 2025: The prohibition on "unacceptable risk" AI systems takes effect. This includes social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and AI that exploits the vulnerabilities of specific groups. US companies operating in the EU need to confirm immediately that none of their systems fall into these categories.
August 2, 2025: Obligations for general-purpose AI (GPAI) models take effect. This is where things get interesting for US AI companies. If you provide a foundation model or large language model that European companies use as a component in their own systems, you are subject to transparency requirements, technical documentation obligations, and copyright compliance measures. For models classified as presenting "systemic risk" (roughly, models trained with compute exceeding 10^25 FLOPs), additional requirements apply, including adversarial testing, incident reporting, and cybersecurity measures.
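To make the systemic-risk threshold concrete, here is a rough back-of-the-envelope sketch. The 6 × parameters × training-tokens approximation is a common heuristic from the scaling-laws literature, not a calculation method prescribed by the Act, and the model size and token count below are hypothetical assumptions for illustration.

```python
# Back-of-envelope check against the Act's 10^25 FLOP presumption of systemic risk.
# The 6 * parameters * tokens approximation is a common heuristic, not the Act's method;
# the figures below are illustrative assumptions, not any specific vendor's numbers.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the 10^25 presumption threshold")
```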
August 2, 2026: The requirements for "high-risk" AI systems take effect (high-risk AI embedded in products covered by existing EU product-safety legislation has until August 2, 2027). This is the most consequential date. High-risk categories include AI used in hiring and employment, credit scoring, insurance underwriting, educational assessment, law enforcement, immigration, and critical infrastructure. For each of these applications, the Act requires conformity assessments, risk management systems, data governance measures, human oversight mechanisms, and ongoing monitoring.
The EU AI Act is not a suggestion. It is a regulation with fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. For context, that penalty structure is more aggressive than GDPR's ceiling of 20 million euros or 4% of turnover.
Extraterritorial Reach: Why US Companies Are in Scope
Like GDPR before it, the EU AI Act applies based on where the effects are felt, not where the company is headquartered. Article 2 makes the scope explicit: the Act applies to providers placing AI systems on the EU market, deployers of AI systems located in the EU, and providers and deployers located outside the EU where the output of their AI system is used in the EU.
That third category is remarkably broad. If a US company operates an AI-powered hiring platform and a European subsidiary of a multinational client uses it to screen candidates in Germany, the US company is subject to the Act. If a US fintech company's credit scoring algorithm is used by a European partner, the same logic applies. Even if a US company's AI chatbot serves European customers from US-based servers, the Act may apply because the output is "used in" the EU.
The practical implication is that most large US technology companies with international operations are in scope for at least some provisions of the EU AI Act. Pretending otherwise is not a compliance strategy.
What the Act Actually Requires: A Technical Perspective
For high-risk AI systems, the technical requirements are substantial and specific. As someone who has built these systems, I can tell you that retrofitting compliance onto existing systems is significantly harder than building it in from the start.
Risk management: Companies must implement a risk management system that operates throughout the AI system's lifecycle. This is not a one-time assessment. It requires continuous identification, analysis, and mitigation of risks, with documented evidence at each stage.
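The Act does not prescribe a particular data model for this, but in practice a continuously maintained risk register is the artifact regulators will expect to see. The sketch below is illustrative only; the fields, statuses, and example entry are assumptions about what documented lifecycle risk tracking might look like, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative lifecycle risk-register entry. The EU AI Act does not prescribe a data
# model; the fields and statuses here are assumptions about what "continuous
# identification, analysis, and mitigation with documented evidence" could look like.

@dataclass
class RiskEntry:
    risk_id: str
    description: str                # e.g., "disparate impact on a protected group"
    lifecycle_stage: str            # design, development, deployment, post-market
    severity: str                   # low / medium / high
    likelihood: str                 # low / medium / high
    mitigation: str                 # documented mitigation measure
    evidence: list[str] = field(default_factory=list)   # links to test reports, audits
    identified_on: date = field(default_factory=date.today)
    last_reviewed: date = field(default_factory=date.today)
    status: str = "open"            # open / mitigated / accepted / closed

register = [
    RiskEntry(
        risk_id="R-001",
        description="Screening model underperforms for candidates over 55",
        lifecycle_stage="post-market",
        severity="high",
        likelihood="medium",
        mitigation="Quarterly disparate-impact testing with documented thresholds",
        evidence=["reports/2025-Q1-bias-audit.pdf"],
    )
]
```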
Data governance: Training, validation, and testing datasets must meet quality criteria including relevance, representativeness, and freedom from errors. Companies must document the data collection methodology, any gaps or shortcomings in the data, and measures taken to address bias. For companies that trained their models on web-scraped data without careful curation, this requirement alone presents a major compliance challenge.
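As one illustration of what a documented data-governance measure might look like, the sketch below compares group representation in a training set against documented reference shares and surfaces missing values. The Act does not prescribe specific statistical tests; the column names, reference proportions, and tolerance are assumptions chosen purely for demonstration.

```python
import pandas as pd

# Illustrative sketch only: the Act requires documented data-governance measures but
# does not prescribe statistical tests. All names and thresholds here are assumptions.

def representativeness_report(df: pd.DataFrame, group_col: str,
                              reference_shares: dict[str, float],
                              tolerance: float = 0.05) -> pd.DataFrame:
    """Compare group shares in a training set against documented reference shares."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "reference_share": expected,
            "within_tolerance": abs(share - expected) <= tolerance,
        })
    return pd.DataFrame(rows)

# Hypothetical hiring dataset with a self-reported gender column.
train_df = pd.DataFrame({"gender": ["F"] * 420 + ["M"] * 560 + ["X"] * 20})
print(representativeness_report(train_df, "gender",
                                {"F": 0.49, "M": 0.49, "X": 0.02}))

print("Missing values per column:")
print(train_df.isna().sum())
```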
Technical documentation: The Act requires detailed documentation of the system's architecture, development methodology, training procedures, validation results, and performance metrics. This documentation must be sufficient for regulators to assess the system's compliance. Many US AI companies treat their model architecture as proprietary. The EU AI Act may require disclosure that companies are not accustomed to providing.
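A useful internal discipline is to treat that documentation as a structured artifact with required sections and to check drafts against it before a regulator ever asks. The sketch below is loosely organized around the themes of the Act's technical-documentation annex; the section names are illustrative shorthand, not the annex's exact headings.

```python
# Minimal documentation-completeness check. Section names are illustrative shorthand
# loosely based on the themes of the Act's technical-documentation annex, not its
# exact headings; real documentation is far more detailed than this sketch implies.

REQUIRED_SECTIONS = {
    "general_description",       # intended purpose, provider, versions
    "development_process",       # architecture, design choices, training methodology
    "data_and_data_governance",  # datasets, provenance, curation, bias measures
    "performance_metrics",       # accuracy, robustness, known limitations
    "risk_management",           # link to the lifecycle risk register
    "human_oversight",           # oversight measures built into the system
    "post_market_monitoring",    # monitoring and incident-handling plan
}

def missing_sections(manifest: dict) -> set[str]:
    """Return required documentation sections that are absent or empty."""
    return {s for s in REQUIRED_SECTIONS if not manifest.get(s)}

draft = {
    "general_description": "Resume-screening model v3.2 for EU deployments",
    "performance_metrics": {"auc": 0.87, "selection_rate_gap": 0.04},
}
print("Still to document:", sorted(missing_sections(draft)))
```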
Human oversight: High-risk AI systems must be designed to allow effective human oversight. This means the system must provide sufficient transparency for the human overseeing it to understand and intervene in its operation. For fully automated decision-making systems, this requirement may necessitate significant architectural changes.
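One common architectural pattern for meaningful oversight is to route decisions to a human reviewer based on confidence, with the model's reasoning surfaced alongside the score. The Act does not mandate this particular mechanism; the sketch below, including its threshold and field names, is an assumption-laden illustration of the kind of change fully automated pipelines may need.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop routing. The Act requires effective human oversight
# but does not mandate this mechanism; the threshold and fields are assumptions.

@dataclass
class Decision:
    subject_id: str
    model_score: float      # e.g., predicted suitability, 0-1
    explanation: str        # reasons surfaced to the human reviewer

def route_decision(decision: Decision, auto_threshold: float = 0.9) -> str:
    """Only high-confidence outcomes proceed automatically; everything else goes to a
    human reviewer, who can accept, reject, or override with the explanation in hand."""
    if decision.model_score >= auto_threshold:
        return "auto_advance"   # still logged and auditable
    return "human_review"

d = Decision("cand-8841", model_score=0.62,
             explanation="Limited match on required certifications; strong experience match")
print(route_decision(d))  # -> human_review
```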
General-Purpose AI Model Obligations
The GPAI provisions deserve special attention because they target the foundation model providers, most of which are US companies. OpenAI, Google, Meta, Anthropic, and others must comply with transparency requirements that include maintaining technical documentation, providing information to downstream deployers, implementing copyright compliance policies, and publishing training content summaries.
For models designated as "systemic risk" models, additional requirements include conducting model evaluations and adversarial testing, tracking and reporting serious incidents, ensuring adequate cybersecurity, and reporting energy consumption. These requirements take effect in August 2025, giving companies limited time to prepare.
Litigation Risk for US Companies
The litigation implications extend beyond direct enforcement actions by EU authorities. EU AI Act compliance, or lack thereof, is already being cited in US litigation as evidence of the standard of care. When a plaintiff argues that a defendant's AI system would be classified as high-risk under the EU AI Act and that the defendant failed to conduct the assessments that European law requires, they are effectively asking the court to consider an international regulatory framework as relevant to the domestic standard of care.
This strategy has precedent. GDPR compliance requirements have been cited in US data breach litigation as evidence of industry standards. The EU AI Act is likely to follow the same pattern, particularly in cases involving AI hiring discrimination, healthcare AI, and financial services AI where the Act's high-risk categories overlap with existing US liability theories.
For defense attorneys, the implication is clear: if your client's AI system is compliant with the EU AI Act's requirements for its risk category, that compliance can serve as evidence of reasonable care. If your client has ignored the Act entirely, plaintiffs' counsel will use that gap aggressively.
Practical Steps for US Companies and Their Counsel
Conduct an AI inventory. You cannot assess your compliance obligations without knowing what AI systems you operate, where they are deployed, and who they affect. Many companies are surprised to discover how many AI systems they have in production.
Classify your systems by risk category. Map each AI system against the Act's risk classification framework. Pay particular attention to AI used in HR, finance, and customer-facing applications, as these are most likely to fall into the high-risk category.
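A crude first pass at inventory triage can be automated, if only to flag systems for legal review. The keyword mapping below is deliberately simplistic and is no substitute for analysis against the Act's actual risk categories; every field name and keyword is an assumption for illustration.

```python
# First-pass triage of an AI inventory. The real classification turns on the Act's
# risk categories and requires legal analysis; this keyword heuristic only flags
# systems for review, and all names and keywords are illustrative assumptions.

HIGH_RISK_HINTS = {"hiring", "employment", "credit", "insurance", "education",
                   "law enforcement", "immigration", "critical infrastructure"}

def triage(inventory: list[dict]) -> list[dict]:
    for system in inventory:
        use_case = system["use_case"].lower()
        system["flag"] = ("review_as_high_risk"
                          if any(hint in use_case for hint in HIGH_RISK_HINTS)
                          else "review_as_limited_or_minimal")
    return inventory

systems = [
    {"name": "resume-screener", "use_case": "Hiring: candidate ranking", "deployed_in_eu": True},
    {"name": "support-chatbot", "use_case": "Customer support assistant", "deployed_in_eu": True},
]
for s in triage(systems):
    print(s["name"], "->", s["flag"])
```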
Start documentation now. The technical documentation requirements are extensive and cannot be compiled retroactively. Begin documenting your training data, model architecture, validation methodology, and performance metrics immediately.
Engage technical expertise. Compliance with the EU AI Act requires understanding both the legal requirements and the technical systems to which they apply. Attorneys who advise clients on AI compliance without technical expertise are operating blind. Retain an expert who can bridge both domains.
The EU AI Act is not a distant regulatory threat. It is current law with imminent enforcement deadlines. US companies that prepare now will have a competitive advantage. Those that wait will face enforcement risk, litigation exposure, and the costly reality of retroactive compliance.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at criterion@thecriterionai.com or call (617) 798-9715.