Is Tesla Responsible for the Deadly Crash On Auto-Pilot? Maybe.


Tesla’s Autopilot had its first fatality, the company announced yesterday. Statistically, this was bound to happen. The self-driving car broadsided a truck that its sensors didn’t detect, and the driver didn’t see it either.

Does Tesla have any responsibility for the accident, even if the driver was supposed to be watching the road at all times?

The argument for why the driver, not Tesla, was responsible is that the driver had agreed to always monitor the road, in case of emergency situations exactly like this one that the car cannot handle. This is part of the company’s standard agreement before it allows customers to use the Autopilot feature, which has been in beta-testing mode since its introduction last October. (Beta-testing means working out the last bugs in a product before its official release to the public.)

The Autopilot feature isn’t perfect, but it’s pretty good. All by itself, it can stay in its highway lane and keep a safe distance from the car in front of it. It can swerve away from another vehicle to avoid a collision. But it can’t detect cars that are stopped on the highway. Technology has its limits.

Tesla’s Autopilot is one of the very first of its kind. Because it is new, unproven technology still in beta, human operators naturally should monitor the system and the road, in case they need to take over in an emergency.

But just because Tesla’s customers agreed to actively monitor the road doesn’t necessarily mean they bear all the responsibility, even if not paying attention causes a crash. Here’s why.

Human guinea pigs?

First, should companies beta-test autonomous driving technologies on public roads to begin with? That’s been a key concern in ethics for years, well before Tesla introduced its Autopilot feature.

This is different from beta-testing, say, office software. If your office app crashes, you might just lose data and some work. But with automated cars, the crash is literal. Two tons of steel and glass, crashing at highway speeds, means that people are likely to be hurt or killed. And even if the user has consented to this testing, other drivers and pedestrians around the robot car haven’t.

Beta-testing on public roads, then, looks like human-subjects research, which is usually governed by an ethics board at research labs and universities. Industry is slowly coming to the same realization, as Facebook did after its emotional-manipulation experiments, now that its products affect people more deeply, psychologically and physically. As far as the public knows, Tesla doesn’t have an ethics board to ensure it is doing right by its customers and the public, as, say, Google DeepMind has for its artificial intelligence research.

But the problem with questioning the ethics of beta-testing automated cars is that it’s mostly academic. In most states, the law is silent on the question, and the presumption is that what’s not illegal is legal; in others, it’s permitted with an eye toward innovation and economic benefits, not so much with ethics in mind. So even if this beta-testing is unethical, there hasn’t been much incentive for companies to avoid it. Still, companies such as General Motors have stated that they would not beta-test on customers, perhaps mindful of ethics and responsibility.

Informed consent?

It’s also unclear whether Tesla’s beta-testers were fully informed of the risk in the first place. Did they know that death was a possibility? Were they reminded that humans are not good at sitting and monitoring a system for extended periods, and that boredom and distraction can quickly set in? Did they know that it can take 2 to 30 seconds or more for a human to regain situational awareness when the car needs to hand control back to the driver? Were they aware of the technology’s limitations?

The situation is much different from monitoring an airplane’s auto-pilot system. Pilots are professionally trained; they receive ongoing training; they’re routinely drug-tested; they have requirements on the amount of sleep or rest they must get before a flight; they usually have a co-pilot as a backup; and so on. Meanwhile, most everyday car drivers only had to pass a 45-minute driving test back when they were teenagers. Even so, airline pilots still get into crashes, so there’s not much hope that ordinary car drivers can babysit computer systems any better while remaining ready to take over with almost no warning.

This "handoff problem” is exactly why some experts are deeply skeptical about Level 3 automation, where control of the car is shared and gets passed between machine and human. Google’s autonomous cars are planned to have Level 4 automation, where no driver intervention is ever needed, thus avoiding this problem. Most cars today only have Level 2 automation, such as anti-lock braking systems.

Unreasonable expectations?

By all accounts, Tesla’s Autopilot works well, even if it can’t handle many situations that human drivers trivially can. For instance, it has trouble with merging and with on/off ramps where lane markings disappear. But it works well enough, operating flawlessly for hours at a time, that it’s only human for drivers to be quickly lulled into a false sense of security.

While users may have agreed to monitor the system every second on the road, it seems to go against human nature to be that robotically focused. The temptation to let one’s mind wander can be irresistible, especially as time passes and boredom sets in. This is a known problem. Tesla’s request to do exactly that may therefore be unrealistic, just as it’s foreseeably unrealistic to ask a group of little kids not to touch the candy sitting in front of them.

Further, most people don’t read the terms and conditions of their software agreements, which often take a law degree to understand. And even when they do understand what they’re getting into, they don’t really understand. Think about all the broken wedding vows to love, honor, and cherish each other for the rest of their lives: we understand the words and have the best intentions, but we fail to appreciate what it means to really make a life-long promise.

….

We don’t yet have all the details of how the Tesla crash happened, so we don’t know exactly what went wrong or who’s at fault. But it’s not too early to think through how beta-testing of autonomous cars should proceed on public roads, if at all.

Merely making drivers agree to be responsible and to unblinkingly monitor a self-driving system doesn’t necessarily make them responsible, because there are other factors at play. For the same reason, giving customers the choice of how a car should react in a weird crash dilemma (an “ethics setting”) doesn’t necessarily release the manufacturer from all responsibility for that hard decision.

And this is just one side of the debate. Later this month in San Francisco, we’re planning to address this exact issue in a half-day ethics session at the Automated Vehicles Symposium, sponsored by the Transportation Research Board and AUVSI. Tesla’s unfortunate accident reminds us that these conversations aren’t just academic but all too real.

~~~

Acknowledgements: This work is supported by the US National Science Foundation and California Polytechnic State University, San Luis Obispo. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the aforementioned organizations.