What Exactly Are the Dangers Posed by A.I.?

A recent letter calling for a moratorium on A.I. development blends real threats with speculation. But concern is growing among experts.

An illustration of a fire engine light covered with stickers calling for a pause or halt of A.I. Credit: Pablo Delcan

Cade Metz writes about artificial intelligence and other emerging technologies.

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

The group, which included Elon Musk, Tesla’s chief executive and the owner of Twitter, urged A.I. labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicting relationship with A.I. Mr. Musk, for example, is building his own A.I. start-up, and he is one of the primary donors to the organization that wrote the letter.

But the letter represented a growing concern among A.I. experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco start-up OpenAI, could cause harm to society. They believed that future systems would be even more dangerous.

Some of the risks have arrived. Others will not arrive for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful A.I. systems is very weak,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “So we need to be very careful.”

Yoshua Bengio spent the past four decades developing the technology that drives systems like GPT-4. Credit: Nasuna Stuart-Ulin for The New York Times

Dr. Bengio is perhaps the most important person to have signed the letter.

Working with two other academics — Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief A.I. scientist at Meta, the owner of Facebook — Dr. Bengio spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the researchers received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft and OpenAI began building neural networks that learned from huge amounts of digital text; these systems are called large language models, or L.L.M.s.

By pinpointing patterns in that text, L.L.M.s learn to generate text on their own, including blog posts, poems and computer programs. They can even carry on a conversation.
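As a rough illustration of that idea, consider a toy program that learns which word tends to follow which in a snippet of text and then strings words together from those patterns. This is only a sketch: real large language models rely on neural networks with billions of parameters, not simple word counts, but the underlying notion of learning patterns from text and then generating new text from them is the same.

```python
# Toy sketch: learn word-to-word patterns from text, then generate new text.
# Real L.L.M.s use large neural networks, not these simple counts.
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def generate(model, start_word, length=12):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        candidates, weights = zip(*followers.items())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

sample = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(sample)
print(generate(model, "the"))
```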

This technology can help computer programmers, writers and other workers generate ideas and do things more quickly. But Dr. Bengio and other experts also warned that L.L.M.s can learn unwanted and unexpected behaviors.

These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”

Companies are working on these problems. But experts like Dr. Bengio worry that as researchers make these systems more powerful, they will introduce new risks.

Because these systems deliver information with what seems like complete confidence, it can be a struggle to separate truth from fiction when using them. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.

“There is no guarantee that these systems will be correct on any task you give them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.

Experts are also worried that people will misuse these systems to spread disinformation. Because they can converse in humanlike ways, they can be surprisingly persuasive.

“We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake,” Dr. Bengio said.

Oren Etzioni, the founding chief executive of the Allen Institute for AI, a lab in Seattle, said “rote jobs” could be hurt by A.I. Credit: Kyle Johnson for The New York Times

Experts are worried that the new A.I. could be job killers. Right now, technologies like GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate content on the internet.

They cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.

A paper written by OpenAI researchers estimated that 80 percent of the U.S. work force could have at least 10 percent of their work tasks affected by L.L.M.s and that 19 percent of workers might see at least 50 percent of their tasks impacted.

“There is an indication that rote jobs will go away,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.

Some people who signed the letter also believe artificial intelligence could slip outside our control or destroy humanity. But many experts say that’s wildly overblown.

The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because A.I. systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.

They worry that as companies plug L.L.M.s into other internet services, these systems could gain unanticipated powers because they could write their own computer code. They say developers will create new risks if they allow powerful A.I. systems to run their own code.
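To make that concern concrete, here is a minimal, hypothetical sketch of the pattern they describe: an application that asks a language model for a program and runs whatever comes back. The `ask_language_model` function below is a stand-in for any L.L.M. service call, not a real API; the step experts flag as risky is executing model-written code without human review.

```python
# Hypothetical sketch of an application that runs code written by a language model.
# ask_language_model() is a placeholder, not a real library call.
import subprocess
import sys

def ask_language_model(prompt: str) -> str:
    """Placeholder: a real system would send the prompt to an L.L.M. service."""
    return 'print("hello from model-written code")'

def run_model_generated_code(task: str) -> str:
    # The model writes a program for the task...
    code = ask_language_model(f"Write Python code to: {task}")
    # ...and the application executes it directly. The generated code's
    # behavior is not known in advance, which is the source of the risk.
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=30)
    return result.stdout

print(run_model_generated_code("say hello"))
```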

“If you look at a straightforward extrapolation of where we are now to three years from now, things are pretty weird,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute.

“If you take a less probable scenario — where things really take off, where there is no real governance, where these systems turn out to be more powerful than we thought they would be — then things get really, really crazy,” he said.

Dr. Etzioni said talk of existential risk was hypothetical. But he said other risks — most notably disinformation — were no longer speculation.

“Now we have some real problems,” he said. “They are bona fide. They require some responsible reaction. They may require regulation and legislation.”

Cade Metz is a technology reporter and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and The World.” He covers artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas. More about Cade Metz

A version of this article appears in print in Section B, Page 5 of the New York edition with the headline: “If Some Dangers Posed by A.I. Are Already Here, Then What Lies Ahead?”
