
Stephen Hawking issues familiar warning to China against AI

Both Stephen Hawking and Tesla's Elon Musk believe that artificial intelligence's (AI) capacity to surpass human intelligence means its development must be watched closely -- but not everyone thinks machines are so smart.
Written by Rajiv Rao, Contributing Writer
(Image: Hawking.org)

It's a pet obsession that physicist Stephen Hawking and entrepreneur Elon Musk -- two people credited with pushing the boundaries of science and technology in pioneering ways -- have in common: runaway machine intelligence could end the human race if developed recklessly.

"I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer," said Hawking in a video appearance at the 2017 Global Mobile Internet Conference Beijing. "Humans, who are limited by slow biological evolution, couldn't compete and could be superseded by AI," he added.

This isn't the first time that Hawking has issued strong caution against AI, having previously warned that it could be "either the best, or the worst thing, ever to happen to humanity."

These are not just dire-sounding prognostications tailor-made for the lecture circuit. Hawking and Musk also signed the Asilomar AI Principles, a 23-point roadmap intended to ensure that AI is used for good in the future. The principles sit within a framework created by the Future of Life Institute (FLI), founded in March 2014 by MIT cosmologist Max Tegmark, Skype cofounder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna. Both Hawking and Musk are on the institute's board of advisors.

Hawking's familiar pronouncement will have all the more resonance in China, a country that in just two to three years has become a force to be reckoned with in AI, especially in the sub-category of machine learning, which includes 'deep learning.' As the Atlantic reports, Chinese researchers in the field have become so prominent that the annual meeting of the industry's biggest AI minds, the Association for the Advancement of Artificial Intelligence, had to be hastily rescheduled when organizers realized the dates clashed with Chinese New Year. Roughly as many of the papers submitted came from Chinese universities as from US ones.

Consequently, you shouldn't be surprised to hear that China's leading 'new economy' companies are knee-deep in AI. Baidu (the Chinese equivalent of Google), Didi (China's Uber rival), and Tencent (a games maker and owner of messaging giant WeChat) all have their own AI labs. Baidu reportedly has as many as 1,300 staff in its AI division and is targeting a 2020 launch for its own self-driving cars. So don't be surprised if the next pioneering moves in AI come from there instead of Silicon Valley.

It therefore makes eminent sense that the topic of the 'singularity' -- the point at which machines surpass the intelligence of their human creators, to humanity's detriment, an idea first articulated by British mathematician Irving John Good in the 1960s and popularized by mathematician and science fiction writer Vernor Vinge in the 1990s -- is being put before the next wave of minds, who will increasingly need to grapple with this fundamental issue.

Yet not everyone is buying Hawking's dystopian vision. Luciano Floridi, professor of philosophy and ethics of information at the Oxford Internet Institute and a research associate at the university's Department of Computer Science, dismisses this apocalyptic vision in his recent paper "The Ethics of Artificial Intelligence," which examines the plausibility of such a doomsday scenario.

"The serious risk is not the appearance of some ultra-intelligence, but that we may misuse our digital technologies, to the detriment of a large percentage of humanity and the whole planet," he said. Lee Kai-fu, the former greater China president of Google and founder of the venture capital firm Sinovation Ventures, doesn't think these outcomes can be so easily formulated based on today's science.

Hawking himself believes that AI is a potential source of tremendous good, and that it could be instrumental in solving the world's chronic and seemingly intractable problems of poverty and disease. But, like Musk, who once compared AI to a nuclear bomb, he thinks it needs serious oversight.

"We spend a great deal of time studying history," Hawking once said, "which, let's face it, is mostly the history of stupidity. So it's a welcome change that people are studying instead the future of intelligence."

