In the Channel 4 series Humans, Niska, played by Emily Berrington, is a conscious synth who kills to survive. Photograph: Channel 4

When robots kill


A recent robot-related death in Germany highlights broader dilemmas in the design of safe autonomous systems.

Last month, a robot grabbed a worker at a Volkswagen plant in Germany, crushing and killing him. This tragic but rare incident has drawn attention to the growing dangers of robotics and artificial intelligence (AI).

As computers become smaller, smarter, faster and more interconnected, we will delegate more tasks to them. Generally, this will make our lives easier, because we will spend less time researching information, getting directions, or driving cars. Well-designed programs can often do these sorts of tasks better than we can.

But as we delegate more to computers, more can go wrong. When a GPS navigation system fails, you can get lost; when a self-driving car fails, you can die. So as computers take on increasingly important roles, more effort is needed to ensure they are safe and reliable.

Self-driving cars are a prime example. A recent McKinsey study estimates that self-driving cars could reduce crashes by 90 per cent. Car crashes currently kill around 1.25 million people per year, according to the World Health Organization. If these numbers hold, universal adoption of self-driving cars would save over one million lives per year (90 per cent of 1.25 million is roughly 1.1 million).

Of course, self-driving cars could potentially fail at a far larger scale than human-driven cars. Today, car crashes are isolated incidents; one dangerous driver has no effect on the rest of the world’s drivers. Self-driving cars could change that: a bug, hacker, or system failure could affect every car on the road. You have to hope you are not on the road if that happens.

The possibility of concurrent failure of self-driving cars and other automated systems makes for a difficult policy challenge. Under normal circumstances, all is well, which fosters complacency. Standards become lax; oversight fades. The very existence of a problem recedes from our memories, until it is too late. Psychologists call this the availability heuristic: people tend to underestimate the risk of rare, catastrophic events unless one has recently occurred.

Perhaps self-driving cars would never have a large concurrent failure. Perhaps their design could avoid this, for example, by putting different operating systems in different models, or not networking them together. With such precautions, society could reap the benefits without worrying about such risks.

Another policy challenge is how to assign liability for harms caused by autonomous systems. This is an ongoing aspect of the inquiry in the Volkswagen case. Did the deceased make a mistake? Or does the fault lie with Volkswagen, the robot’s manufacturer, programmers, or even the robot itself? Assigning liability is even murkier when the harm is caused by unexpected emergent behavior, such as when Google Photos recently labeled some black people as “gorillas”. Should Google be held liable for the software it creates, even when it acts in ways that Google neither expects nor condones?

Today’s robots are semi‑autonomous at best, acting on specific goals that an operator sets in motion. But the time may be near when some AIs could be advanced enough to merit the status of a person under the law. A key issue then becomes what a manufacturer can reasonably foresee about such products. If it is reasonably foreseeable that a self‑driving car should stop, say, fifteen feet short of a hazard, the manufacturer could be liable if its car fails to do so and crashes. But if a robot assaults a human in disregard of its instructions, with no hardware or software lapse, then man versus machine could become a literal court case.

Punishment is a trickier matter. Should guilty robots be jailed? Should they be required to pay compensation? Robots could be provided with personal funds for such cases, but even then the money would, in effect, be a form of insurance paid by the manufacturer. We need to keep designing regulations that can account for non‑human systems with intelligent, if not independent, minds.

As autonomous systems become more sophisticated, the stakes rise. This problem is particularly acute in the case of artificial superintelligence (ASI), which would vastly exceed human intelligence in all respects. No ASI exists today, and it may be many decades (if ever) before one is built. AI experts are divided on whether ASI is even possible. But work has begun on making any future ASI safe, including a new grant program which one of us (Baum) is part of, funded by Elon Musk and the Open Philanthropy Project in collaboration with the Future of Life Institute.

If ASI is programmed safely, it could be of wide benefit; if not, it could kill everybody. Because ASI could outsmart humans, some believe it could dominate the planet. Here, the availability heuristic poses a severe problem: either ASI has not yet been built, in which case there are no recent examples to inspire concern, or it has been built and everyone is dead. ASI policy must be proactive, but that means overcoming our psychological tendencies.

This is where events like the Volkswagen plant incident can be of some value. While tragic for the deceased and his family, such an episode draws attention to the broader dilemmas of autonomous systems safety. It is a canary in the deadly robotic coalmine. We would be wise to heed its signal and make sure that robotic systems stay safe.

Seth Baum (@SethBaum) is Executive Director and Trevor White is a Junior Associate at the Global Catastrophic Risk Institute. White is also a law student at Cornell University. They research policy issues in autonomous systems and other catastrophic risks. The views here are the authors’ alone.
