On March 18, 2026, Federal Reserve Chair Jerome Powell told the Senate Banking Committee that the Fed had not taken rate hikes off the table. This was not a casual remark. Oil prices had surged past $110 per barrel following the escalation of hostilities in the Persian Gulf, and the latest CPI print showed inflation ticking back up to 4.2%. The market dropped 600 points in the hour after Powell's testimony. The yield on the 10-year Treasury jumped 18 basis points.
For the AI industry, Powell's remarks are not just a macroeconomic signal. They are a liability event.
The AI Boom Was Built on Cheap Money
The explosive growth of the AI industry from 2023 through mid-2025 was fueled by capital that was, by historical standards, extraordinarily cheap. Venture capital firms deployed record amounts into AI startups. Public companies announced tens of billions in AI infrastructure spending. Microsoft committed $80 billion to AI data centers. Google, Amazon, and Meta each announced comparable investments. Smaller companies levered up to fund AI transformations, often at the encouragement of management consultants and board advisors who warned that falling behind in AI would be existential.
Much of this spending was justified by projections that assumed continued revenue growth, favorable capital markets, and rapid returns on AI deployment. Those projections were built on assumptions that are now crumbling.
When capital was cheap, the calculus was simple: borrow at 3%, invest in AI infrastructure, and generate returns over a five-to-ten-year horizon. When capital costs 7% or more, that same investment may never generate a positive return. And when revenue growth slows because the economy is contracting, the timeline for returns stretches further still, potentially indefinitely.
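That arithmetic can be made concrete with a toy discounted-cash-flow calculation. Every figure below is an illustrative assumption, not data from any actual deployment; the point is only how sharply the net present value of the same project moves when the discount rate shifts from 3% to 7%.

```python
# Illustrative NPV of a hypothetical AI infrastructure investment under
# two costs of capital. All figures are assumptions for illustration.

def npv(rate, initial_outlay, annual_cash_flow, years):
    """Net present value of a level annual cash flow against an upfront cost."""
    return -initial_outlay + sum(
        annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1)
    )

outlay = 100.0     # $100M upfront infrastructure spend (assumed)
cash_flow = 13.0   # $13M/year in assumed efficiency gains
horizon = 10       # ten-year return horizon

cheap = npv(0.03, outlay, cash_flow, horizon)  # borrow at 3%
tight = npv(0.07, outlay, cash_flow, horizon)  # capital now costs 7%

print(f"NPV at 3%: {cheap:+.1f}M")  # +10.9M: the project clears its hurdle
print(f"NPV at 7%: {tight:+.1f}M")  # -8.7M: the identical project destroys value
```

Nothing about the project changed between the two lines except the cost of capital. That is the mechanism behind a great many AI investment decisions that looked sound in 2023 and look indefensible now.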
Where the Liability Emerges
The transition from boom to bust creates liability exposure in several distinct categories.
Securities fraud. Public companies that made aggressive representations about AI revenue projections, deployment timelines, and return on investment are now exposed to securities fraud claims if those representations were materially misleading when made. The key legal question is not whether the projections turned out to be wrong (projections often are), but whether management had information suggesting the projections were unreasonable at the time they were made. If internal analyses showed that AI deployments were underperforming expectations, that customer adoption was slower than projected, or that the technology was not delivering the promised efficiency gains, and management continued to make bullish public statements, that is the foundation of a 10b-5 claim.
Fiduciary duty. Board members who approved massive AI capital expenditures face potential fiduciary duty claims if those decisions were not adequately informed. The business judgment rule provides significant protection, but it is not absolute. If a board approved a $5 billion AI infrastructure investment without adequate due diligence on the technical feasibility, market demand, or downside scenarios, shareholders may argue that the decision was not the product of informed business judgment. The technical complexity of AI investments makes this analysis particularly challenging: did the board actually understand what they were approving, or did they rely on management presentations that oversimplified the risks?
Vendor liability. Companies that sold AI solutions with performance guarantees or implied warranties of fitness face liability when those solutions fail to deliver in a tightening economy. If an AI vendor promised that its system would reduce a client's customer service costs by 40%, and the system achieves only a 10% reduction while introducing new categories of error, the client has potential claims for breach of warranty, breach of contract, and potentially fraud if the vendor knew or should have known the claims were inflated.
Employment liability. Companies that laid off thousands of workers to fund AI deployments, and then find that the AI systems cannot perform the work those workers did, face a particularly uncomfortable form of liability. The workers may have wrongful termination claims (particularly if the layoffs had disparate impact on protected classes). The company may face negligent misrepresentation claims from investors who were told the AI transition would be seamless. And management may face personal liability if they made representations about AI capabilities that they knew were exaggerated.
The Expert Witness Role in AI Investment Disputes
AI investment liability cases require technical expertise that goes beyond traditional financial analysis. A forensic accountant can calculate damages. A securities expert can analyze market impact. But neither can assess whether the AI technology at the center of the dispute actually works as claimed. That is where AI expert witnesses become essential.
Evaluating AI performance claims. When a company represents that its AI system achieves a particular level of accuracy, efficiency, or capability, an AI expert can evaluate whether that claim is supported by the technical evidence. This involves examining the system's architecture, training data, evaluation methodology, and deployment performance. In my experience, the gap between marketing claims and actual performance is often substantial, and the internal documentation frequently reveals that the company was aware of this gap.
Assessing deployment feasibility. Many AI investment decisions were based on deployment timelines that were, in retrospect, unrealistic. An AI expert can assess whether the projected timeline was reasonable given the state of the technology, the complexity of the deployment environment, and the company's technical capabilities. If management projected full deployment in 12 months for a system that, by any reasonable technical assessment, would require 36 months or more, that discrepancy is relevant to both fiduciary duty and securities fraud claims.
Quantifying technical risk. AI systems carry technical risks that are not always apparent to non-technical decision-makers: data drift, model degradation, adversarial vulnerability, scalability limitations, and integration complexity. An expert can assess whether these risks were adequately disclosed to investors and adequately considered by the board in its decision-making process.
The AI boom was built on projections. The bust will be litigated on evidence. The gap between what companies promised and what the technology actually delivered will be measured in expert reports, not press releases.
The Stagflation Scenario
The worst case for AI companies is not a recession. It is stagflation: rising prices combined with stagnant or negative growth. In a simple recession, costs fall and companies can restructure. In a stagflationary environment, costs rise (energy, compute, talent) while revenue stagnates or declines. AI companies are particularly vulnerable because their cost structure is dominated by compute and energy, both of which rise sharply during an oil-driven inflationary shock.
Consider the math. A large AI company operating a fleet of GPU clusters consumes enormous amounts of electricity. When oil prices spike and electricity costs rise 30 to 50%, the cost of running those clusters rises proportionally. At the same time, enterprise customers facing their own cost pressures are cutting AI budgets, reducing usage, and demanding renegotiation of contracts. Revenue falls while costs rise. Margins compress. The investment thesis collapses.
For companies that financed their AI infrastructure with floating-rate debt, the situation is even worse. As the Fed raises rates, their borrowing costs increase on top of their rising operational costs. This is the kind of scenario that produces not just losses, but insolvencies. And insolvencies produce litigation: creditor claims, shareholder derivative suits, and fraud investigations.
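The squeeze described above can be sketched as a one-line P&L. The inputs below are hypothetical round numbers, not figures from any real company; they simply show how a modest revenue decline, an energy-cost spike, and a floating-rate reset combine to push a profitable operation underwater.

```python
# Hypothetical stagflation squeeze on an AI operator's annual P&L.
# All inputs are illustrative assumptions, not data from any real company.

def operating_result(revenue, energy_cost, other_opex, debt, rate):
    """Earnings after operating costs and floating-rate interest (in $M)."""
    return revenue - energy_cost - other_opex - debt * rate

# Baseline year: healthy margins on cheap power and cheap debt.
base = operating_result(revenue=500.0, energy_cost=120.0,
                        other_opex=250.0, debt=400.0, rate=0.04)

# Stagflation year: revenue down 15%, energy up 40%, and Fed hikes push
# the floating rate from 4% to 7%. Nothing else changes.
squeeze = operating_result(revenue=500.0 * 0.85, energy_cost=120.0 * 1.40,
                           other_opex=250.0, debt=400.0, rate=0.07)

print(f"baseline earnings: {base:+.0f}M")      # +114M
print(f"stagflation earnings: {squeeze:+.0f}M")  # -21M
```

No single shock in that scenario is catastrophic on its own. It is the combination, revenue and costs moving in opposite directions at once, that converts a solvent company into a litigation defendant.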
What Companies Should Do Now
Stress-test your AI investments. Run scenarios that assume rate hikes, reduced revenue, and increased compute costs. If your AI investments are not viable under these scenarios, that is information your board needs now, not after the losses materialize.
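A stress test of this kind does not require sophisticated tooling; a simple scenario grid is enough to surface the combinations that break the investment case. The sketch below uses illustrative parameters (revenue shocks, compute-cost shocks, and two borrowing rates are all assumptions); substitute your own projections.

```python
# A minimal scenario grid for stress-testing an AI investment, as described
# above. Scenario parameters are illustrative; plug in your own projections.
from itertools import product

def project_return(base_revenue, revenue_shock, base_compute_cost,
                   compute_shock, debt, rate):
    """One-year net result (in $M) under a shocked revenue/cost/rate scenario."""
    revenue = base_revenue * (1 + revenue_shock)
    compute = base_compute_cost * (1 + compute_shock)
    return revenue - compute - debt * rate

revenue_shocks = [0.0, -0.10, -0.25]  # flat, mild recession, sharp recession
compute_shocks = [0.0, 0.20, 0.50]    # flat, moderate, oil-shock energy costs
rates = [0.04, 0.07]                  # pre- and post-hike borrowing costs

for r_shock, c_shock, rate in product(revenue_shocks, compute_shocks, rates):
    result = project_return(base_revenue=200.0, revenue_shock=r_shock,
                            base_compute_cost=140.0, compute_shock=c_shock,
                            debt=150.0, rate=rate)
    flag = "OK" if result > 0 else "UNDERWATER"
    print(f"revenue {r_shock:+.0%}, compute {c_shock:+.0%}, "
          f"rate {rate:.0%}: {result:+.1f}M {flag}")
```

If the grid shows your investment underwater in any scenario the Fed has publicly put on the table, that result belongs in front of your board, with a date stamp, before the losses materialize.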
Document your decision-making. Ensure that AI investment decisions are supported by thorough documentation: board presentations, due diligence reports, risk assessments, and dissenting views. The absence of documentation is not neutral in litigation. It creates an inference that the decision was not adequately informed.
Review your public statements. If your company has made public representations about AI capabilities, revenue projections, or deployment timelines, review them now against current reality. If there is a material gap, consider corrective disclosure. The cost of a voluntary disclosure is almost always lower than the cost of a securities fraud class action.
Preserve evidence. If you are anticipating that AI investments may underperform, implement a litigation hold now. Preserve internal communications, performance data, board materials, and any documents that reflect management's knowledge of AI performance relative to projections. Spoliation of evidence in a subsequent securities fraud case would be catastrophic.
The Litigation Wave Is Coming
Every major economic downturn produces a wave of securities litigation. The dot-com bust generated hundreds of class actions. The 2008 financial crisis produced even more. The AI bust, if it materializes, will follow the same pattern. But AI litigation will be technically more complex than anything that came before, because the underlying technology is more complex, the performance claims are harder to evaluate, and the chain of causation between investment decisions and losses runs through technical systems that most judges and juries do not intuitively understand.
This is where expert witnesses will be decisive. The ability to explain AI technology to a non-technical audience, to evaluate whether performance claims were reasonable, and to connect technical failures to financial losses will determine the outcome of the cases that define this era.
Powell's testimony was a signal. The market heard it. The question now is whether corporate boards and their legal counsel are listening too.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.