Computers are now beating humans at poker. What’s next for artificial intelligence? Photograph: AP

It's time for some messy, democratic discussions about the future of AI

Jack Stilgoe and Andrew Maynard

With a new set of principles for artificial intelligence, tech pioneers seem to be developing a conscience. Good – but the discussion must include more voices

Today in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.

In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference, when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit, that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.

The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head off top-down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts, was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.

The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.

Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than its 1975 equivalent, but there were some notable absences – no members of the public or their concerns, no journalists, and few experts in the responsible development of new technologies.

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and should not be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable omissions, including any commitment to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

But avoiding awkward public conversations helps nobody. Scientists are more inclined to guess at what the public are worried about than to ask them, which can lead to some serious blind spots – not necessarily in scientific understanding (although this too can occur), but in the direction and nature of research and development.

Where troublesome short-term applications are discussed by AI researchers, they are often interpreted in ways that are convenient to engineers. Ethicists, for example, have been quick to see self-driving cars as a test case for their ‘trolley problems’: ethical dilemmas in which a machine is forced to choose between killing, say, a bus queue of pedestrians or its own driver – irrespective of whether this is the most pressing issue for manufacturers, drivers and communities.

Looking further into the future, AI engineers and philosophers have joined a chorus of concern over the possible folly of a super-intelligence causing a global apocalypse, despite rather long odds on the viability of this scenario. We are seeing an AI echo chamber, in which speculative discussions have taken on a moral significance that far exceeds their social importance, at the expense of more pressing challenges.

The reality is that AI is already a thing in the world, enabling and constraining our lives in ways that we barely understand. Over the coming years AI-based technologies are going to impact how we work, travel, communicate, date and buy things. The effective governance of AI urgently needs to get beyond follies and trolleys, and the decisions can’t just be taken by a narrow group of experts. There’s a pretty high chance that, if asked, citizens would say they are less concerned about AI ending the world and more interested in how AI could affect their livelihoods and security, and how the benefits and risks will be distributed.

As we’ve found from other technologies like nanotechnology and synthetic biology, innovation that is responsive to people’s needs requires partnerships across many different stakeholders – including citizens. It demands a sophisticated understanding of the social, economic and environmental landscape around emerging risks and benefits. And it relies on people from all walks of life and areas of expertise having a say in what their collective future will look like.

This can get messy. It involves engaging with people who may not see the world the same way. But without such grounded approaches to responsible innovation, the chances of beneficial AI becoming a reality begin to dwindle.

The new Asilomar principles are a starting point. But they don’t dig into what is really at stake. And they lack the sophistication and inclusivity that are critical to responsive and responsible innovation. To be fair, the principles’ authors realize this, presenting them as ‘aspirational goals’. But within the broader context of a global society that is faced with living with the benefits and the perils of AI, they should be treated as hypotheses – the start of a conversation around responsible innovation rather than the end. They now need to be democratically tested.
