OPINION

Ethics, efficiency, and artificial intelligence

Should we allow machines to impersonate humans?

A metal head made of motor parts symbolizes artificial intelligence at the 2019 Essen Motor Show. Martin Meissner/Associated Press

In 2018, Google unveiled Duplex, an artificial intelligence-powered assistant that sounds eerily human-like, complete with ‘umms’ and ‘ahs’ that are designed to make the conversation more natural. The demo had Duplex call a salon to schedule a haircut and then call a restaurant to make a reservation.

As Google’s CEO Sundar Pichai demonstrated the system at Google’s I/O (input/output) developer conference, the crowd cheered, hailing the technological achievement. Indeed, this represented a big leap toward developing AI voice assistants that can pass the “Turing Test,” which requires machines to be able to hold conversations while being completely indistinguishable from humans.

But not everyone was so enthusiastic. Some technology commentators saw it as a form of “deception by design.” In a 2018 tweet, prominent University of North Carolina techno-sociologist Zeynep Tufekci described the system as “horrifying,” and wrote: “Silicon Valley is ethically lost, rudderless, and has not learned a thing.”

Responding to public pressure, a Google spokeswoman said in a statement, “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified.”

But what if knowing that we are interacting with a bot made for a worse human experience? Suppose you are interacting with a customer service agent that you know is just a computer program. Might you give yourself a little more license to use abusive language or to lob insults? After all, you are not going to hurt any real human beings. As satisfying as this might be, could this shift in your behavior lead to longer and less efficient customer service calls and a worse overall experience for you?

To explore these questions, we ran studies in which participants played a cooperation game with either a human associate or a bot that used AI to adapt its behavior to maximize its payoffs. This game was designed to capture situations in which each of the interacting parties can either act selfishly in an attempt to exploit the other, or act cooperatively in an attempt to attain a mutually beneficial outcome.
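The article does not spell out the payoff structure of the cooperation game, but the trade-off it describes is the familiar one from prisoner's-dilemma-style games: exploiting a cooperative partner pays best for the exploiter, while mutual cooperation is the best joint outcome. The sketch below is a hypothetical illustration of that structure, not the authors' actual experimental design; the specific payoff numbers are assumptions.

```python
# Hypothetical payoff matrix for a prisoner's-dilemma-style cooperation game.
# (my move, partner's move) -> (my payoff, partner's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutually beneficial outcome
    ("cooperate", "defect"):    (0, 5),  # I am exploited by a selfish partner
    ("defect",    "cooperate"): (5, 0),  # I exploit a cooperative partner
    ("defect",    "defect"):    (1, 1),  # both act selfishly, both do poorly
}

def play_round(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return (my payoff, partner's payoff) for one round of the game."""
    return PAYOFFS[(my_move, partner_move)]

if __name__ == "__main__":
    print(play_round("cooperate", "cooperate"))  # (3, 3)
    print(play_round("defect", "cooperate"))     # (5, 0)
```

The key property is that defecting is individually tempting, yet mutual cooperation yields the better joint outcome, which is why a partner's ability to elicit cooperation matters so much in games like this.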

In some instances, participants were told who they were interacting with: a human or a bot. In others, we gave false information about the associate’s identity: some participants were told they were interacting with a bot when they were actually interacting with a human, and others were told they were interacting with a human when in fact it was a bot.

The results showed that bots posing as humans were highly effective at persuading their partners to cooperate in the game. In fact, these bots were better at eliciting cooperation from humans than other humans were. When the bot’s true nature was revealed, however, cooperation rates dropped significantly, and the bots’ superiority was negated.

In fact, among all conditions we studied, the best outcome was achieved when people interacted with bots but were told they were interacting with humans. This is precisely the situation that outraged people over the Google Duplex demo and that caused Google to back off and indicate that they will disclose the nonhuman nature of the system.

As AI systems continue to approach — or exceed — human-level performance in various tasks, bots will be increasingly capable of passing as humans. In the near future, we will interact with bots on the phone, social media, or even video, in a variety of contexts, from business to government to entertainment — and they will be indistinguishable from their human counterparts.

Our research reveals that while much-touted algorithmic transparency is important, it may sometimes come at a cost. So now we must ask ourselves: Should we allow companies to deceive us into thinking bots are human if this makes us happier customers or more polite, cooperative people? Or does interacting with a machine we believe is a human violate something sacred, like human dignity? Which matters more to us: transparency or efficiency? And in what contexts might we prefer one or the other? Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are.

Science, including our own experiment, cannot answer this question, since it is a question about what we value most — transparency or efficiency. Maybe we can have both, as humans learn to work cooperatively with machines. But until we do, society needs to recognize and grapple with the ethics and trade-offs.

Talal Rahwan is an associate professor at New York University Abu Dhabi. Jacob Crandall is an associate professor at Brigham Young University. Fatimah Ishowo-Oloko is a PhD graduate from Khalifa University. Iyad Rahwan is an associate professor at MIT and director of the Max Planck Institute for Human Development.