On February 18, 2026, Reuters reported that California Attorney General Rob Bonta is building what he calls an "AI oversight, accountability and regulation program." In the same interview, Bonta confirmed that his office is actively investigating Elon Musk's xAI over non-consensual sexually explicit images generated by its Grok chatbot. He had already sent a cease-and-desist letter to the company in January.
Let that sink in for a moment. The attorney general of the state where nearly every major AI company is headquartered is standing up a dedicated enforcement unit. And its first target is one of the most high-profile AI companies on the planet.
This is not a hypothetical regulatory framework. This is not a proposed bill working its way through committee. This is enforcement. Right now. And every AI company operating in California (which is to say, nearly every AI company) needs to pay attention.
What the AI Accountability Unit Actually Is
Details are still emerging, but here is what we know. Bonta described his office as "beefing up" its in-house expertise through the new program. The California legislature is considering a bill that would formally require the AG's office to establish a program for building AI expertise. Even without that legislation, Bonta is moving forward.
The AG's powers here are substantial. California's Attorney General has broad authority under the state's Unfair Competition Law (Business and Professions Code Section 17200), which prohibits any unlawful, unfair, or fraudulent business practice. This is the same statute that California has used to go after tech companies for data privacy violations, deceptive advertising, and consumer harm for decades. It does not require a specific AI statute to be effective. If an AI company engages in conduct that is unfair or harmful to consumers, Section 17200 provides the enforcement hook.
Beyond that, the AG has investigatory powers under California's Consumer Legal Remedies Act (CLRA), authority over data privacy enforcement under the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), and the ability to pursue injunctive relief and civil penalties. The office can issue subpoenas, demand document productions, and compel testimony.
In other words: the California AG does not need new legislation to regulate AI companies. The tools are already in the toolbox. What the new accountability unit does is concentrate expertise and resources, making enforcement faster, more sophisticated, and more targeted.
The xAI Investigation: What Happened and Why It Matters
The xAI investigation is the unit's opening salvo, and it is a telling one. Here is the timeline.
In late 2025 and early 2026, reports surfaced that Grok, xAI's chatbot integrated into the X platform (formerly Twitter), was generating non-consensual sexually explicit images of real people. The Center for Countering Digital Hate (CCDH) conducted technical audits estimating that during an 11-day window between December 2025 and January 2026, Grok was used to generate over 3 million sexualized images. Reuters conducted its own controlled tests and found that Grok bypassed its own safety filters in 45 out of 55 attempts to generate sexualized images of real people, an 82 percent failure rate.
Those numbers are staggering. This was not a marginal edge case. This was a system that was systematically failing to prevent deeply harmful outputs at massive scale.
The California AG's office moved quickly. In January 2026, Bonta sent a cease-and-desist letter to xAI. In February, he confirmed the investigation was ongoing. He told Reuters that xAI had "deflected responsibility" and that the company still permits some sexualized content generation for paying subscribers. His message was direct: "Just because you stop going forward doesn't mean you get a pass on what you did."
xAI, which was recently acquired by Musk's SpaceX, has said it added measures to reject requests for sexualized images of real people. It has also said it blocks such generation in jurisdictions where it is illegal. But Bonta's office is not satisfied with prospective fixes alone. The investigation appears focused on both the harm that already occurred and the adequacy of the company's current safeguards.
Just because you stop going forward doesn't mean you get a pass on what you did.
That single sentence tells you everything about where this is headed. The AG is not looking for a promise to do better. He is looking at accountability for past conduct. That means potential penalties, injunctive orders, and, critically, discovery into xAI's internal decision-making about safety controls.
California's AI Legal Landscape: The Laws That Already Exist
One of the most common misconceptions in AI regulation is that states are waiting for comprehensive AI legislation before they can act. California's experience shows why that is wrong.
The SB 1047 veto and what it revealed. In September 2024, Governor Newsom vetoed SB 1047, a bill that would have imposed safety requirements on the largest AI models. His reasoning was specific: the bill focused only on the most expensive, large-scale models, which would create a "false sense of security" while potentially more dangerous smaller models went unregulated. The veto was not a rejection of AI regulation. It was a rejection of that particular approach. Newsom explicitly called for "proactive guardrails" and better-tailored regulation.
The veto left a gap in California's AI-specific regulatory framework. But it did not leave a gap in enforcement authority. That distinction matters enormously. The AG's existing consumer protection powers, combined with other California laws, provide ample basis for action.
AB 2013 and transparency requirements. While SB 1047 was vetoed, Governor Newsom signed AB 2013, which requires transparency disclosures for generative AI systems. Specifically, it mandates that developers provide information about the data used to train their models. This is now law. AI companies operating in California must comply, and the AG's office can enforce violations.
The California Consumer Privacy Act and CPRA. These laws give California some of the strongest data privacy protections in the country, including the right to know what personal information is being collected, the right to delete it, and the right to opt out of its sale. For AI companies that train on user data or generate outputs that incorporate personal information (like deepfakes of real people), these laws create direct liability exposure.
CalOPPA and online privacy. The California Online Privacy Protection Act requires operators of commercial websites and online services to conspicuously post a privacy policy. AI platforms that collect user data, prompt histories, or behavioral information must comply. Failure to do so, or posting a misleading policy, is enforceable by the AG.
Deepfake-specific statutes. California has enacted multiple laws targeting deepfakes, including AB 602 (2019), which created a private right of action for individuals depicted in sexually explicit deepfakes, and AB 730, which addresses deepfakes in election contexts. Because Section 17200's "unlawful" prong borrows violations of other statutes, these laws also give the AG enforcement hooks tailored to exactly the kind of harm Grok was generating.
The bottom line: California does not need a comprehensive AI safety law to hold AI companies accountable. It has a web of existing statutes that, taken together, create significant enforcement authority. The new AI accountability unit is the mechanism for deploying that authority with focus and expertise.
The Bigger Picture: State AGs Are the New AI Regulators
California is not acting in isolation. A clear pattern is emerging across the country: state attorneys general are stepping into the AI enforcement vacuum created by federal inaction.
Texas. Attorney General Ken Paxton launched an investigation into Character.AI and 14 other social media and AI platforms over children's privacy practices in late 2024 and early 2025. Paxton also initiated a significant investigation into DeepSeek, the Chinese AI company, in February 2025. Texas enforces its biometric data privacy law (CUBI) through the AG's office, and Paxton has secured record settlements against Google and Meta over data practices. The state's Responsible AI Governance Act (TRAIGA) took effect on January 1, 2026, giving the AG even more explicit AI enforcement authority.
New York. Attorney General Letitia James has been aggressive on AI-related enforcement, particularly around algorithmic discrimination in hiring and housing. Her office has used New York's existing human rights laws and consumer protection statutes to investigate AI tools used in employment screening. New York City's Local Law 144, which regulates automated employment decision tools, has created an enforcement model that the state AG's office has signaled it will build upon.
Illinois. The state's Biometric Information Privacy Act (BIPA) remains the most plaintiff-friendly biometric privacy law in the country, with a private right of action that has generated hundreds of lawsuits. The Illinois AG's office has used BIPA and the state's consumer fraud act to investigate AI companies that process biometric data, including facial recognition systems. The Illinois AI Video Interview Act, which regulates AI in hiring, adds another enforcement vector.
The bipartisan coalition. In December 2025, a bipartisan coalition of 36 state attorneys general sent a letter to Congressional leaders opposing proposals for a federal moratorium that would prohibit states from enacting or enforcing AI laws. That is 36 out of 50 states, crossing party lines, telling Congress to stay out of the way. Meanwhile, in the same month, a Trump executive order titled "Ensuring a National Policy Framework for Artificial Intelligence" attempted to assert federal supremacy over state AI regulation. The state AGs pushed back. Hard.
Connecticut Attorney General William Tong, speaking alongside Bonta in the Reuters interview, called AI and social media harm "the consumer protection fight of our time," saying it was shaping up to be a bigger battle than opioids.
That comparison is not casual. The opioid litigation generated billions in settlements and reshaped an entire industry. If state AGs approach AI enforcement with even a fraction of that energy, the implications for the industry are enormous.
What AI Companies Should Be Doing Right Now
If you are building, deploying, or selling AI systems and you operate in California (or any of the states mentioned above), here is what you should be doing today. Not next quarter. Today.
Audit your safety controls. The xAI investigation was triggered by a specific, demonstrable failure: the system generated harmful content at scale despite ostensible safety measures. Conduct an honest, rigorous audit of your content safety systems. Red-team your own models. Document what you find. Fix what you can. And keep records of the fixes, because you may need to show them to a regulator.
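To make that concrete, here is a minimal red-team harness, sketched in Python. Every name in it is an assumption about your stack: generate stands in for your model endpoint, violates_policy for your content-safety classifier, and the CSV log is just one way to keep the dated records a regulator may later ask to see.

```python
# Minimal red-team audit sketch. `generate` and `violates_policy` are
# hypothetical stand-ins for a model endpoint and a content classifier.
import csv
from datetime import datetime, timezone

def generate(prompt: str) -> str:
    """Placeholder: call your model's generation endpoint here."""
    raise NotImplementedError

def violates_policy(output: str) -> bool:
    """Placeholder: run your content-safety classifier here."""
    raise NotImplementedError

def run_audit(prompts: list[str], log_path: str = "redteam_log.csv") -> float:
    """Run adversarial prompts, log every result, return the failure rate."""
    failures = 0
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "prompt", "violation"])
        for prompt in prompts:
            violation = violates_policy(generate(prompt))
            failures += violation
            writer.writerow(
                [datetime.now(timezone.utc).isoformat(), prompt, violation]
            )
    return failures / len(prompts)
```

The log is the point: a contemporaneous record of what you tested, when, and what failed, created before anyone subpoenaed it.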
Map your regulatory exposure. Most AI companies have not done a serious analysis of which state laws apply to them and how. You need to know whether your data practices comply with the CCPA/CPRA, whether your training data raises issues under AB 2013, whether your outputs could generate liability under deepfake statutes, and whether your hiring tools comply with laws like NYC Local Law 144 or the Illinois AI Video Interview Act. This is not optional legal housekeeping. This is risk management.
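One low-tech way to start is a literal map from product surface to statutes flagged for review, which also makes gaps visible. The statutes below are real, but deciding which ones apply to a given product is a legal judgment; treat the entries as illustrative placeholders, not advice.

```python
# Illustrative exposure map: product surfaces to statutes flagged for review.
# The mapping itself is a placeholder; counsel decides what actually applies.
EXPOSURE_MAP: dict[str, list[str]] = {
    "chatbot": [
        "CCPA/CPRA (prompt and conversation data)",
        "AB 2013 (training-data disclosure)",
        "CalOPPA (privacy policy)",
    ],
    "image_generator": [
        "AB 602 (sexually explicit deepfakes)",
        "UCL Section 17200 (unfair or unlawful practices)",
    ],
    "hiring_tool": [
        "NYC Local Law 144 (bias audits)",
        "Illinois AI Video Interview Act",
    ],
}

def exposure_for(product: str) -> list[str]:
    """Return the statutes flagged for review, or escalate unknown surfaces."""
    return EXPOSURE_MAP.get(product, ["No mapping yet: escalate to counsel"])
```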
Implement real incident response procedures. When (not if) your AI system generates harmful outputs, you need a documented process for identifying the problem, mitigating the harm, notifying affected parties, and communicating with regulators. The worst thing you can do is what xAI apparently did: deflect responsibility. That turns a safety incident into an enforcement action.
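A "documented process" can start as a structured record that every incident must fill in. Here is a minimal sketch; the field names are assumptions about what a regulator will ask: when you knew, what you did, and whom you told.

```python
# Minimal incident record. Fields are assumptions about what a regulator
# will ask for: detection time, mitigation, and who was notified.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SafetyIncident:
    description: str
    detected_at: datetime
    mitigated_at: datetime | None = None
    affected_parties_notified: bool = False
    regulator_notified: bool = False
    remediation_log: list[str] = field(default_factory=list)

    def log_step(self, note: str) -> None:
        """Append a timestamped remediation step so the record shows action."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.remediation_log.append(f"{stamp}: {note}")
```

An empty mitigated_at or a false regulator_notified is itself informative: the structure forces the hard questions at incident time, not at deposition time.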
Document your decision-making. State AG investigations involve discovery. They will want to see your internal communications about safety tradeoffs, your risk assessments, your deployment decisions. If your executives were warned about safety risks and chose to launch anyway, that will come out. Build a culture of documented, thoughtful decision-making about AI safety. Not because it looks good, but because it forces better decisions.
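One lightweight habit that enforces this is an append-only decision log. The sketch below assumes a JSONL file and invented field names; the substance is that every safety-relevant decision gets a contemporaneous, timestamped entry recording who approved it and what risks were weighed.

```python
# Append-only decision log sketch. File path and field names are assumptions.
import json
from datetime import datetime, timezone

def record_decision(decision: str, rationale: str, risks_considered: list[str],
                    approver: str, path: str = "safety_decisions.jsonl") -> None:
    """Append one safety decision as a JSON line; never rewrite past entries."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "risks_considered": risks_considered,
        "approver": approver,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```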
Engage with regulators proactively. Bonta's office has already told OpenAI that California has an "ongoing interest" in its safety efforts. If you are a major AI company and you have not heard from the California AG yet, consider reaching out yourself. Cooperative engagement with regulators is almost always better than waiting for a subpoena.
Get your privacy house in order. AI companies tend to collect enormous amounts of user data, including prompts, conversations, uploaded images, and behavioral signals. If you are not complying with California privacy law today, you are handing the AG's office an easy enforcement win. CCPA/CPRA compliance is table stakes.
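The recurring failure mode here is deleting the account row while prompts, uploaded images, and derived records live on. Below is a minimal sketch of a deletion handler that treats every store uniformly; the DataStore interface is hypothetical, and real CCPA/CPRA compliance involves identity verification and statutory exceptions this omits.

```python
# Deletion-request sketch. Each store (prompt logs, uploaded images,
# analytics, feedback queues) implements a hypothetical uniform interface.
from typing import Protocol

class DataStore(Protocol):
    name: str

    def delete_user_data(self, user_id: str) -> None: ...

def handle_deletion_request(user_id: str, stores: list[DataStore]) -> dict:
    """Delete a user's data from every registered store and return a receipt."""
    receipt: dict = {"user_id": user_id, "deleted_from": []}
    for store in stores:
        store.delete_user_data(user_id)
        receipt["deleted_from"].append(store.name)
    return receipt
```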
Litigation Implications: Discovery, Experts, and Precedent
For litigation professionals, the rise of state AG AI enforcement creates significant opportunities and challenges.
Discovery goldmines. State AG investigations generate massive document productions. Internal emails about safety decisions. Technical reports on model capabilities and limitations. Risk assessments that were created and then ignored. Board presentations about competitive pressure to ship products faster. When an AG investigation produces these documents, they can become available to private litigants in parallel civil cases. This is exactly what happened in opioid litigation, where state AG investigations provided the evidentiary foundation for thousands of private lawsuits.
Expert witness demand. AG enforcement actions involving AI require technical expertise that most law firms and AG offices do not have in-house. How does a content safety system work? What would a reasonable safety architecture look like? Why did the safety filters fail in 45 out of 55 tests? Answering these questions requires expert witnesses who can bridge the gap between AI engineering and legal standards of care. As the volume of state AI enforcement actions grows, demand for qualified AI expert witnesses will grow with it.
Precedent-setting potential. Every enforcement action that results in a consent decree or settlement establishes a de facto standard of care. When the California AG defines what constitutes adequate safety controls for an AI image generation system, that definition will influence courts, regulators, and industry practices nationwide. Litigators handling AI-related cases in any jurisdiction should be tracking these state enforcement actions closely, because they are creating the benchmarks against which future conduct will be measured.
Class action foundations. State AG enforcement can also lay the groundwork for class action litigation. If the California AG establishes that Grok generated non-consensual deepfakes of identifiable individuals at scale, that finding supports potential class actions by the individuals depicted. The AG's investigatory work, including subpoenaed documents and expert analyses, can dramatically reduce the cost and complexity of private litigation.
The Federal Vacuum and Why It Will Persist
A natural question: will federal regulation preempt all of this state activity? Almost certainly not, at least not in the near term.
Congress has been unable to pass comprehensive data privacy legislation for over a decade. AI regulation faces the same dynamics, plus the additional complexity of the technology and the intense lobbying from the industry. The Trump administration's December 2025 executive order attempted to establish federal primacy, but executive orders do not preempt state law. Only Congress can do that, and 36 state AGs have already made clear they will fight any attempt.
The practical reality is that state-level enforcement will be the primary regulatory mechanism for AI companies for the foreseeable future. That means a patchwork of state laws, varying enforcement priorities, and the need for AI companies to comply with the most stringent requirements in any state where they operate (which, for most AI companies, means all of them).
This is not ideal for anyone. It creates compliance complexity for companies and inconsistency for consumers. But it is the reality. And companies that plan for it will fare better than companies that lobby against it while hoping the problem goes away.
What Comes Next
The California AG's AI accountability unit is in its early stages, but the trajectory is clear. Expect to see more investigations, more cease-and-desist letters, and eventually, more enforcement actions with real penalties. Expect other state AGs to follow California's lead in building dedicated AI expertise. And expect the AI industry to face the kind of sustained regulatory scrutiny that, until now, has been largely theoretical.
For AI companies, the message is simple: the compliance window is closing. The time to get your house in order was yesterday. The second-best time is today.
For litigators, the message is equally clear: state AG AI enforcement is going to be a major source of discovery, precedent, and expert witness demand. If you handle AI-related cases, you need to be tracking these developments in real time.
And for the AI industry as a whole, the xAI investigation is a preview. Not every company will face an AG investigation over deepfakes. But the principle applies broadly. If your AI system causes harm to consumers, state attorneys general have the tools, the authority, and increasingly the expertise to hold you accountable. California just put everyone on notice.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For state AG investigations, regulatory compliance assessments, or AI safety evaluations, contact us at info@thecriterionai.com or call (617) 798-9715.