OpenAI's recent announcement of agentic AI services for enterprise clients was not surprising to anyone following the industry. What was notable was the target market. Financial services firms were among the first adopters, deploying AI agents that can autonomously analyze market data, generate investment theses, execute trades, and rebalance portfolios. These are not chatbots answering customer questions. These are autonomous systems making consequential financial decisions with real money.

The legal question is obvious and urgent: when an AI agent loses your money, who pays? The answer depends on which legal framework applies, and right now, none of the available frameworks fits cleanly. Fiduciary duty, negligence, product liability, and securities regulation each capture part of the problem. None captures all of it.

The Fiduciary Duty Problem

Financial advisors owe their clients a fiduciary duty: the obligation to act in the client's best interest with the care, skill, and diligence that a prudent professional would exercise. When a human advisor makes a bad investment, the fiduciary framework is straightforward. Did the advisor conduct adequate research? Was the recommendation suitable for the client's risk profile? Did the advisor disclose conflicts of interest?

When an AI agent makes the same bad investment, the fiduciary analysis fractures. The AI agent is not a fiduciary. It has no legal personhood, no professional license, and no duty of loyalty. But someone deployed the AI agent to perform a fiduciary function. The firm that deployed the agent presumably owes a fiduciary duty to the client. The question is whether the firm satisfied that duty by deploying an AI system it believed was competent, or whether delegation to an AI system is itself a breach of the duty of care.

The analogy to human delegation is instructive but imperfect. A financial advisor can delegate research tasks to a junior analyst. If the junior analyst makes an error, the advisor remains responsible because the advisor supervised the work and made the final decision. But agentic AI systems are designed to operate without human supervision for each decision. The entire point of agentic AI is that it acts autonomously. If the firm must review every decision the AI makes, the efficiency gains disappear. If the firm does not review every decision, it is delegating fiduciary judgment to a machine.

A fiduciary cannot outsource its duty of loyalty to an algorithm. But the financial industry is doing exactly that, and the law has not decided whether this is a permissible delegation or an inherent breach.

Negligence: The Standard of Care Question

Negligence claims against firms deploying financial AI agents will turn on the standard of care. What would a reasonably prudent financial services firm do when deploying an AI agent to manage client assets?

The standard is still forming, but several elements are emerging. A prudent firm would validate the AI system's performance against historical data and out-of-sample scenarios. It would implement risk limits that constrain the AI's ability to make outsized bets. It would monitor the AI's decisions in real time and maintain the ability to override or halt the system. It would conduct stress testing to understand how the AI behaves in market conditions outside its training distribution, such as flash crashes, liquidity crises, and black swan events. And it would disclose to clients that their assets are being managed by an AI system, including the system's known limitations.
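
What several of these precautions look like in practice can be sketched in a few lines of code. The fragment below is a minimal illustration under assumed names and thresholds, not any firm's actual control layer: a deterministic gate between the AI agent and the broker that enforces position, loss, and trade-rate limits, and exposes a kill switch a human supervisor can trip.

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_position_pct: float = 0.05    # no single position above 5% of portfolio (hypothetical)
    max_daily_loss_pct: float = 0.02  # halt trading after a 2% daily drawdown (hypothetical)
    max_trades_per_hour: int = 20     # throttle runaway trading loops (hypothetical)

class TradeGate:
    """Sits between the AI agent and the broker; every proposed order passes through here."""

    def __init__(self, limits: RiskLimits):
        self.limits = limits
        self.halted = False           # kill switch a human supervisor can also trip directly
        self.trades_this_hour = 0
        self.daily_pnl_pct = 0.0      # updated elsewhere as fills and marks come in

    def halt(self, reason: str) -> None:
        """Stop all trading; in practice this would page the supervision desk."""
        self.halted = True
        print(f"TRADING HALTED: {reason}")

    def approve(self, position_pct: float) -> bool:
        """Return True only if the proposed order passes every limit."""
        if self.halted:
            return False
        if abs(position_pct) > self.limits.max_position_pct:
            return False              # reject oversized bets outright
        if self.trades_this_hour >= self.limits.max_trades_per_hour:
            self.halt("trade-rate limit exceeded")    # possible runaway loop
            return False
        if self.daily_pnl_pct <= -self.limits.max_daily_loss_pct:
            self.halt("daily loss limit breached")
            return False
        self.trades_this_hour += 1
        return True
```

The specific numbers matter less than the structure: the agent proposes, a layer the firm controls disposes, and every rejection leaves a record that a court or regulator can later examine.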

Firms that fail to take these precautions will face negligence claims when AI-driven losses occur. The challenge for plaintiffs is establishing what the standard of care requires, because industry practice is still evolving. The challenge for defendants is demonstrating compliance with a standard that may be higher than current industry practice suggests, because courts may conclude that the risks of autonomous financial AI demand greater caution than the industry has exercised.

Product Liability: Is the AI a Defective Product?

When an AI agent makes a catastrophic financial decision, product liability offers an alternative theory. The AI system is a product. It was designed by a technology company (OpenAI, in this case), deployed by a financial services firm, and used to manage a client's assets. If the AI system has a defect that causes financial harm, both the designer and the deployer may be liable under product liability principles.

The design defect analysis for financial AI is complex. Under the risk-utility test, the question is whether the AI system's design created risks that outweighed its benefits. An AI agent that generates strong returns 95% of the time but occasionally suffers catastrophic losses due to a known failure mode presents a classic risk-utility problem. The benefits are real. The risks are foreseeable. The question is whether the designer could have implemented safeguards to mitigate the catastrophic failure mode without eliminating the system's benefits.
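
The arithmetic behind this risk-utility problem is worth making concrete. The figures below are purely hypothetical, chosen for illustration rather than drawn from any actual system, but they show how a strategy that wins 95% of the time can still have negative expected value once its catastrophic failure mode is priced in:

```python
import math

# Illustrative (hypothetical) figures: the agent gains 2% in a normal month
# but loses 40% in the 5% of months where its known failure mode is triggered.
p_normal, gain = 0.95, 0.02
p_crash, loss = 0.05, -0.40

# Simple expected return per month: already slightly negative.
expected = p_normal * gain + p_crash * loss
print(f"expected monthly return: {expected:+.4f}")   # +0.019 - 0.020 = -0.0010

# Compounded (geometric) growth is worse, because a large loss is not
# undone by a symmetric gain: expected log-wealth drifts down faster.
log_growth = p_normal * math.log(1 + gain) + p_crash * math.log(1 + loss)
print(f"expected log growth per month: {log_growth:+.4f}")   # about -0.0067
```

On these illustrative numbers the routine gains do not survive the tail: the expected monthly return is slightly negative, and compounded growth is worse still, because a 40% loss is not repaired by a string of 2% gains.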

The failure-to-warn theory may be even more powerful. If the AI system has known limitations, such as poor performance during high-volatility events or susceptibility to adversarial market manipulation, and the technology company did not adequately warn the deploying firm, the technology company may face strict liability for failing to warn. Similarly, if the deploying firm knew about the AI's limitations and did not disclose them to clients, the firm faces failure-to-warn exposure of its own.

The Regulatory Gap

The Securities and Exchange Commission and the Financial Industry Regulatory Authority have not issued comprehensive guidance on the use of agentic AI in financial services. This is a significant gap, because the existing regulatory framework was designed for a world in which humans make investment decisions and machines execute them.

The SEC's existing rules on investment advisers require registration, disclosure, and compliance with fiduciary standards. But these rules assume a human advisory relationship. When an AI agent is the de facto advisor, questions arise about who is the registered adviser, what disclosures are adequate, and how the fiduciary standard applies to algorithmic decision-making.

FINRA's rules on suitability and best execution similarly assume human judgment. The suitability rule requires that investment recommendations be appropriate for the specific client's financial situation and objectives. An AI agent that manages thousands of client accounts simultaneously may apply a suitability analysis that is statistically sound in the aggregate but inappropriate for specific clients whose circumstances the AI does not fully understand.
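
The gap between aggregate and individual suitability is easy to make concrete. The sketch below is hypothetical (the profile fields and thresholds are invented for illustration), but it shows how a recommendation tuned to the average account can violate a hard constraint for a specific client:

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    risk_tolerance: float         # maximum acceptable portfolio volatility
    liquidity_horizon_days: int   # how soon the client may need the cash
    max_equity_pct: float         # client-specific concentration constraint

@dataclass
class Recommendation:
    expected_volatility: float
    lockup_days: int
    equity_pct: float

def suitable_for(client: ClientProfile, rec: Recommendation) -> bool:
    """A per-client gate: statistically sound in aggregate is not enough."""
    return (rec.expected_volatility <= client.risk_tolerance
            and rec.lockup_days <= client.liquidity_horizon_days
            and rec.equity_pct <= client.max_equity_pct)

# A recommendation tuned to the average account across thousands of clients...
rec = Recommendation(expected_volatility=0.18, lockup_days=90, equity_pct=0.80)

# ...fails for a retiree who may need to draw down savings within the month.
retiree = ClientProfile(risk_tolerance=0.08, liquidity_horizon_days=30,
                        max_equity_pct=0.40)
print(suitable_for(retiree, rec))   # False: unsuitable despite aggregate soundness
```

An agent that never runs this per-account check can be statistically sound across thousands of portfolios and still be unsuitable, in the regulatory sense, for the clients at the tails.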

The regulatory gap creates risk for firms that deploy financial AI agents and opportunity for firms that get ahead of the regulatory curve. Firms that develop robust internal governance frameworks for AI deployment, including testing protocols, risk limits, monitoring systems, and client disclosure practices, will be better positioned both to avoid regulatory action and to defend against private litigation.

The Coming Litigation

The first wave of agentic AI finance litigation will likely involve cases in which AI agents generated outsized losses during market dislocations. These are the cases where the AI's behavior diverges most dramatically from what clients expected, and where the damages are largest. The AI agent that performs well in calm markets but panics during a volatility spike, executing a rapid series of trades that lock in losses, will generate both client complaints and regulatory scrutiny.

The second wave will involve more subtle harms: AI agents that systematically underperform because of training data biases, AI agents that generate excessive trading to maximize the deploying firm's fee revenue, and AI agents that fail to account for tax implications or client-specific constraints. These cases will be harder to litigate because the harm is diffuse and the causation is complex, but the aggregate damages may be enormous.

Expert witnesses in these cases will need to evaluate the AI system's architecture, training data, decision-making process, and risk management framework. They will need to compare the AI's behavior against both the standard of care and the firm's representations to clients. And they will need to explain, in terms a jury can understand, how an autonomous AI system can make decisions that no human authorized and that the deploying firm may not have anticipated.

The financial industry's embrace of agentic AI is driven by genuine efficiency gains and competitive pressure. But efficiency without accountability is a recipe for harm. The legal system will impose accountability. The only question is whether firms build it in now or have it imposed on them later, at a much higher cost.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.