
The White House talks AI, but does it understand?

By Brendan Byrne

27 July 2016


A bewildering world for the White House

John Lund/Blend Images/Getty

“Data is public in the same way the White House is.”

So said Jer Thorp, a Brooklyn-based software artist, in an “AI Now” workshop on 7 July organised by the White House’s Office of Science and Technology Policy. Thorp meant to highlight the uncertain ownership status of our personal data. He might just as well have been referring to the inscrutability of the White House itself: AI now?

This was the final session in the White House’s series exploring the uses of artificial intelligence and their implications. But why, in the final months of Barack Obama’s administration, and faced with a host of social and environmental problems, is the focus on artificial intelligence?

It’s natural to imagine that this is just a spot of legacy-burnishing from a president who made the drone strike integral to his country’s foreign policy. Is the White House simply hitching itself to the current AI hype cycle, or does it actually have a position on the subject? The answer seems to be neither. It looks as though the US government genuinely does not know what its position should be, and is trying to learn as much as possible, as fast as possible – a vision that’s as strategically disconcerting as it is intellectually admirable.

There is a great deal of anxiety here. The White House’s report on big data, published in May, highlighted the way data, algorithmically processed, can worsen social discrimination. Ask an expert system a mean-spirited or wrong-headed question, and it will surely answer in the same spirit.

Kate Crawford, a principal researcher at Microsoft Research, pressed the point in the first AI Now workshop in May, citing a recent article about how algorithms which predict future criminals are biased against black people.

Worthy of Monty Python

The next session also focused on the real-world implications of high-level processing. You could unclog city congestion, track bird flight, or anticipate police brutality. Here too, we had a barn-burning speech by Roy Austin, of the White House Domestic Policy Council, explaining the myriad problems the criminal justice system faces in exploiting big data: “We are not ready for machine learning,” he said, also emphasising that it would accentuate racial bias.

But though the conclusion of the White House’s report called for “accountability mechanisms” to address the problem of discrimination, its suggestions are weak beer indeed: “Encourage market participants to design the best algorithmic systems”, it says, exhibiting a faux naivety worthy of Monty Python.

Across the series, little distinction was made between AI, machine learning and advanced algorithmic processing. Questions of cognition, consciousness, personhood and potential citizenship were not explored. Few speakers bothered to mention that a potential AI might quite likely approach problems in a distinctively non-human manner and so be able to tackle problems that humans seem utterly unable to grapple with (climate change, for instance).

The determined if slightly muddled focus on the here and now was a relief in some ways. At least attendees did not have to sit through yet another retelling of Nick Bostrom’s deeply paranoid Superintelligence (2014), which warns of the rise of a singleton, defined as “some form of agency that can solve all major global coordination problems”.

A kind of corporate-hacktivist ethos permeated the most recent session, and an atmosphere of quiet positivity. No one felt any pressing need to ask Yann LeCun, now director of AI Research at Facebook, how much fun he was having, playing with all the data he now has access to. Latanya Sweeney, professor of government and technology at Harvard University, and a computer scientist, noted that she seemed to be the only techno-pessimist in the room.

It is heartening that the federal government, at least in its current guise, is speaking plainly about the failures of the criminal justice system. But dire warnings, without prescriptive legislation or executive orders, do little. For instance, whistle-blowing news site The Intercept has revealed that the FBI is expanding use of its flawed facial-recognition technologies.

Worryingly, it appears that the White House called a conference on AI primarily so that it could warn people that AI should not be allowed anywhere near some sections of government. As one colleague remarked, “This shows you how scared they are about this happening.”

Brendan Byrne is a writer and critic based in New York

