The National Highway Traffic Safety Administration is investigating Tesla's Full Self-Driving (FSD) system. The scope of the investigation covers approximately 2.9 million vehicles equipped with FSD, making it one of the largest automotive safety probes in history. NHTSA's inquiry focuses on whether FSD's AI-driven behavior creates an unreasonable risk to safety, particularly in conditions where the system's performance degrades in ways that drivers may not anticipate.
For product liability attorneys, this investigation is the inflection point they have been anticipating. Autonomous driving AI creates a liability puzzle that existing frameworks struggle to solve. The question at the center of every FSD crash case is deceptively simple: who is driving? The answer, it turns out, is no one and everyone at the same time.
The Product Liability Framework
Traditional automotive product liability follows well-established patterns. A vehicle has a defect in design, manufacturing, or warnings. The defect causes an accident. The manufacturer is liable. The analysis is clean because the vehicle is a static product. A defective brake caliper is defective when it leaves the factory and remains defective until it fails.
FSD breaks this framework in a fundamental way. The "product" is not static. It is a neural network that processes real-time sensor data and makes driving decisions multiple times per second. Its behavior changes with every software update Tesla pushes over the air. A vehicle that drives safely on Monday may behave differently on Tuesday after an overnight update. The "product" the consumer purchased is not the product that caused the accident, because the product has been continuously modified by the manufacturer after sale.
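A toy sketch makes the point concrete. Nothing below is drawn from Tesla's software; the version strings, scenario encoding, and threshold values are invented for illustration. The only claim is structural: a single retuned parameter can flip a driving decision between updates while the product keeps the same name.

```python
# Illustrative only: a toy stand-in for how an over-the-air update can change
# behavior on identical inputs. The version strings, scenario encoding, and
# thresholds are invented; nothing here is drawn from Tesla's software.

SCENARIO = {"pedestrian_confidence": 0.58}

def decide(model_version: str, scenario: dict) -> str:
    # Hypothetical: an update retunes the confidence threshold at which the
    # planner treats a detection as a pedestrian and brakes for it.
    threshold = {"v2024.44": 0.55, "v2024.45": 0.60}[model_version]
    return "brake" if scenario["pedestrian_confidence"] >= threshold else "maintain speed"

# The same vehicle, the same scene, one overnight update apart:
print(decide("v2024.44", SCENARIO))  # brake
print(decide("v2024.45", SCENARIO))  # maintain speed
```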
This creates complications under each theory of product liability.
Design defect. Under the risk-utility test, a product has a design defect if the risks of the design outweigh its benefits. For FSD, this analysis requires comparing the AI system's overall safety record against human drivers. Tesla argues that FSD is statistically safer than human driving, and the aggregate data may support this claim. But aggregate safety statistics are cold comfort to the plaintiff whose family member was killed by a specific FSD failure. The relevant question in litigation is not whether FSD is safer on average, but whether the specific failure mode that caused this accident reflects a design defect that Tesla could have avoided.
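A back-of-the-envelope calculation, using invented numbers, shows why the aggregate statistics and the defect analysis can point in opposite directions:

```python
# Hypothetical rates, chosen only to illustrate the statistical point: a
# system can beat the human baseline in aggregate while one specific
# failure mode remains far worse than human performance.

human_crashes_per_mile = 2.0e-6   # assumed human baseline
ai_crashes_per_mile    = 1.2e-6   # assumed aggregate AI rate (40% lower)

# Assume unprotected left turns account for 1% of human crashes but 15%
# of AI crashes, a concentrated failure mode hiding inside a better average.
human_left_turn = human_crashes_per_mile * 0.01
ai_left_turn    = ai_crashes_per_mile * 0.15

print(f"aggregate: AI at {ai_crashes_per_mile / human_crashes_per_mile:.0%} of the human rate")
print(f"left turns: AI at {ai_left_turn / human_left_turn:.1f}x the human rate")
# aggregate: AI at 60% of the human rate
# left turns: AI at 9.0x the human rate
```

On these assumed figures, the AI is 40 percent safer overall yet nine times worse than a human driver in the one scenario that matters to this plaintiff. Litigation turns on the second number, not the first.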
Manufacturing defect. Traditional manufacturing defect claims involve a product that deviates from its intended design. For software, every copy is identical; there is no manufacturing variation in the traditional sense. But FSD's behavior is not identical across vehicles, because it depends on sensor calibration, local driving conditions, and the specific inputs the system encounters. Two identical Teslas running identical FSD software may behave differently in the same intersection because their cameras are calibrated slightly differently, or because subtle differences in sensor input push the network toward different outputs. Whether this variability constitutes a "manufacturing defect" is an open question.
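The calibration point can be made concrete with a toy example. The projection model, the numbers, and the braking threshold below are all invented; the point is only that a small calibration offset can flip a decision that sits near a threshold, even with byte-identical software:

```python
# Illustrative only: the projection model, numbers, and threshold are
# invented. The point is that a half-degree pitch miscalibration can flip
# a decision near a threshold, even when the software is byte-identical.

def estimated_distance_m(pixel_height: float, pitch_offset_deg: float) -> float:
    # Toy monocular distance estimate; a pitch miscalibration biases it.
    return (1000.0 / pixel_height) * (1.0 + 0.04 * pitch_offset_deg)

BRAKE_IF_CLOSER_THAN_M = 25.0

for offset_deg in (0.0, -0.5):  # two "identical" cars, half a degree apart
    d = estimated_distance_m(pixel_height=40.0, pitch_offset_deg=offset_deg)
    action = "brake" if d < BRAKE_IF_CLOSER_THAN_M else "maintain speed"
    print(f"pitch offset {offset_deg:+.1f} deg -> {d:.1f} m -> {action}")
# pitch offset +0.0 deg -> 25.0 m -> maintain speed
# pitch offset -0.5 deg -> 24.5 m -> brake
```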
Failure to warn. Tesla's warnings about FSD are extensive on paper. The system requires drivers to keep their hands on the wheel and eyes on the road. But the name "Full Self-Driving" communicates something that contradicts the warnings. If a reasonable consumer interprets "Full Self-Driving" to mean the car can drive itself fully, the gap between the product's name and its actual capabilities is itself a failure to warn. NHTSA has previously flagged this concern, and it is likely to be central to the investigation's findings.
Tesla called it Full Self-Driving. The fine print says you must supervise it constantly. When the name and the disclaimer contradict each other, the name wins in the mind of the consumer. That gap is where liability lives.
Who Is Liable: Tesla, the Driver, or the Algorithm?
The liability allocation question in FSD cases is genuinely novel. Three potential defendants exist, and the interactions between them create a web of competing arguments.
Tesla as manufacturer. Tesla designed and deployed FSD. It pushes software updates to vehicles without the owner's affirmative consent for each update. It collects driving data from its fleet and uses that data to train its AI models. Under a strict product liability theory, Tesla is liable for defects in FSD regardless of the driver's conduct. The strength of this theory depends on proving that the specific failure was a defect rather than a limitation that the driver should have anticipated.
The driver as operator. Tesla's defense in every FSD case begins with the driver. The driver agreed to the terms of service. The driver was warned to maintain attention. The driver had the ability and the obligation to intervene. If the driver was not paying attention when FSD made an error, Tesla argues, the driver's negligence is the proximate cause of the accident. This argument has surface appeal, but it creates a logical problem. If the system requires constant human supervision to be safe, it is not meaningfully autonomous, and selling it as "Full Self-Driving" is misleading. Tesla cannot simultaneously claim the system is advanced enough to justify its name and primitive enough to require constant human oversight.
The algorithm as decision-maker. In a growing number of cases, attorneys are arguing that the AI system itself is the relevant decision-maker. The algorithm decided to accelerate, to brake, to steer, or to do nothing. The driver and Tesla both relied on the algorithm's judgment. When the algorithm's judgment was wrong, the question is not who was "driving" in the traditional sense, but who bears responsibility for the algorithm's decision. This framing shifts the analysis from negligence to product liability, which generally favors plaintiffs because it eliminates the need to prove fault.
The Expert Witness Role
FSD litigation requires expert witnesses who can do something that very few people can do: explain to a jury how a neural network makes driving decisions, why it fails in specific ways, and whether those failures were foreseeable and preventable.
The technical analysis in an FSD case involves several layers. First, the expert must reconstruct what the AI system perceived. Tesla vehicles record sensor data, including camera feeds and vehicle telemetry (and radar returns on older hardware). The expert analyzes this data to determine what the AI "saw" in the moments before the accident. Second, the expert must explain how the AI processed that information. This requires understanding the neural network architecture, its training data, and its known failure modes. Third, the expert must evaluate whether the AI's decision was reasonable given its inputs, or whether the system's response reflects a defect in design, training, or deployment.
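The shape of that first analytical layer can be sketched in a few lines. The record format here is hypothetical; what an expert actually works from depends on the data the manufacturer produces in discovery:

```python
# A sketch of perception-log reconstruction. The record format is
# hypothetical; actual fields depend on the data produced in discovery.

from dataclasses import dataclass

@dataclass
class PerceptionRecord:
    t: float             # seconds before impact
    object_class: str    # the system's label for the tracked object
    confidence: float    # the system's confidence in that label
    planned_action: str  # what the planner did with the detection

log = [
    PerceptionRecord(4.0, "vehicle",    0.91, "maintain speed"),
    PerceptionRecord(3.0, "vehicle",    0.88, "maintain speed"),
    PerceptionRecord(2.0, "unknown",    0.34, "maintain speed"),
    PerceptionRecord(1.0, "pedestrian", 0.71, "brake"),
]

# The expert's question: when did the system first track the object, and
# how long did classification churn delay a braking response?
first_tracked = max(r.t for r in log)
braking_planned = max(r.t for r in log if r.planned_action == "brake")
print(f"object tracked from t-{first_tracked:.0f}s; "
      f"braking first planned at t-{braking_planned:.0f}s")
# object tracked from t-4s; braking first planned at t-1s
```

In a real matter the same reconstruction runs over thousands of records and is cross-checked against physical evidence, but the core question is the one in the comments: when did the system know enough to act, and why didn't it?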
On the defense side, Tesla's experts will argue that the system performed within its design parameters, that the driver was warned about the system's limitations, and that the specific failure was an edge case that no autonomous driving system could have handled. The battle of experts will often come down to whether the failure was a known failure mode that Tesla could have addressed, or a genuinely novel scenario that was not foreseeable.
NHTSA's investigation will generate a wealth of data relevant to these cases. The agency's findings on FSD's safety record, its failure modes, and Tesla's responsiveness to known issues will become foundational evidence in private litigation. Plaintiffs' attorneys should be monitoring the investigation closely, because NHTSA's technical analysis will provide the evidentiary foundation for years of FSD product liability claims.
What Comes Next
The NHTSA probe will likely result in one of three outcomes: a finding that FSD does not present an unreasonable safety risk, a requirement for specific technical changes or enhanced warnings, or a recall. A recall of 2.9 million vehicles would be unprecedented for a software-based driving system, but Tesla has conducted FSD-related recalls under NHTSA pressure before, including a 2023 recall of roughly 362,000 vehicles remedied through an over-the-air update.
Regardless of the regulatory outcome, the private litigation is already underway and accelerating. Each FSD accident generates a potential lawsuit. Each lawsuit requires technical expert witnesses who can bridge the gap between neural network architecture and product liability doctrine. The attorneys who invest now in understanding FSD's technology will be positioned to handle the wave of cases that the NHTSA investigation is about to supercharge.
The autonomous driving revolution promised to make roads safer, and it may yet deliver on that promise. But the transition period, in which AI systems that are better than the average driver yet far from perfect share the road with humans who may not understand the technology's limitations, is generating litigation at a pace that will define automotive product liability for a generation.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.