Why We Really Should Ban Autonomous Weapons: A Response

Autonomous weapons could lead to low-cost micro-robots that can be deployed to anonymously kill thousands. That's just one reason why they should be banned


This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

We welcome Evan Ackerman’s contribution to the discussion of a proposed ban on offensive autonomous weapons. This is a complex issue, and there are interesting arguments on both sides that need to be weighed carefully. That process is well under way: several hundred position papers have been written in the last few years by think tanks, arms control experts, and nation-states. His article, written as a response to an open letter signed by more than 2,500 AI and robotics researchers, makes four main points:

(1) Banning a weapons system is unlikely to succeed, so let’s not try.

(2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil.

(3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat?

(4) What we really need, then, is a way of making autonomous armed robots ethical.

Note that his first two arguments apply to any weapons system. Yet the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons; and even for arms such as chemical weapons, land mines, and cluster munitions, where bans have been breached or not universally ratified, severe stigmatization has limited their use. We wonder whether Ackerman supports those bans and, if so, why.

“A treaty can be effective in this regard by stopping an [autonomous weapons] arms race and preventing large-scale manufacturing of such weapons. Moreover, a treaty certainly does not apply to defensive anti-robot weapons, even if they operate in autonomous mode”

Argument (2) amounts to the claim that as long as there are evil people, we need to make sure they are well armed with the latest technology; to prevent them from gaining access to the most effective means of killing people is to “blame the technology” for the evil inclinations of humans. We disagree. The purpose of preventing them from gaining access to the technology is to prevent them from killing large numbers of people. A treaty can be effective in this regard by stopping an arms race and preventing large-scale manufacturing of such weapons. Moreover, a treaty certainly does not apply to defensive anti-robot weapons, even if they operate in autonomous mode.

Question (3) is, in our opinion, a rather irrelevant distraction from the more important question of whether to start an arms race. It is an interesting point that we discuss in the open letter, and it represents exactly the pro-weapon position espoused over the last several years by some participants in the debate. The current answer to this question is certainly no: AI systems are incapable of exercising the required judgment. The answer might eventually change, however, as AI technology improves. But is it actually “the real question,” as Ackerman asserts? We think not. His argument, like those of others before him, has an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities—numbers, times, places, circumstances, victims—will be exactly those that would have occurred with human soldiers, had autonomous weapons been banned. This is rather like assuming that cruise missiles will be used only in exactly those settings where spears would have been used in the past. Obviously, the assumption is false. Autonomous weapons are completely different from human soldiers and would be used in completely different ways. As our open letter makes clear, the key issue is the likely consequences of an arms race—for example, the availability on the black market of mass quantities of low-cost, anti-personnel micro-robots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria. Autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.

“Autonomous weapons are completely different from human soldiers and would be used in completely different ways. (...) The key issue is the likely consequences of an arms race—for example, the availability on the black market of mass quantities of low-cost, anti-personnel micro-robots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria”

Which leads to Ackerman’s fourth point: his proposed alternative plan of making autonomous armed robots ethical. But what, more specifically, is this plan? To borrow a phrase from the movie Interstellar, in Ackerman’s world robots will always have their “humanitarian setting” at 100 percent. Yet in his first argument he worries about enforcement of a ban: how would it be easier to verify that enemy autonomous weapons are 100 percent ethical than to verify that they are not produced in the first place? Moreover, one cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules.

One point on which we agree with Ackerman is that negotiating and implementing a ban will be hard. But as John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the future of humanity.

Stuart Russell is a professor of computer science and director of the Center for Intelligent Systems at UC Berkeley, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach.” Max Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. Toby Walsh is a professor of AI at the University of New South Wales and NICTA, Australia, and president of the AI Access Foundation.
