What exactly are the dangers posed by AI?
In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that AI technologies present “profound risks to society and humanity.”
The group, which included Elon Musk, Tesla’s CEO and the owner of Twitter, urged AI labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The letter, which now has more than 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicted relationship with AI. Musk, for example, is building his own AI startup, and he is one of the primary donors to the organization that wrote the letter.
But the letter represented a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by San Francisco startup OpenAI, could cause harm to society. They believe future systems will be even more dangerous.
Some of the risks have already arrived. Others will not arrive for months or years. Still others are purely hypothetical.
“Our ability to understand what could go wrong with very powerful AI systems is very weak,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “So we need to be very careful.”
Why are they worried?
Bengio is perhaps the most important person to have signed the letter.
Working with two other academics – Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief AI scientist at Meta, the owner of Facebook – Bengio spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the researchers received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies such as Google, Microsoft and OpenAI began building neural networks, known as large language models or LLMs, that learned from huge amounts of digital text.
By pinpointing patterns in that text, LLMs learn to generate text on their own, including blog posts, poems and computer programs. They can even carry on a conversation.
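The core idea, finding patterns in text and using them to produce new text, can be illustrated with a deliberately tiny sketch. Real LLMs like GPT-4 use deep neural networks with billions of parameters; the toy model below merely counts which word follows which in a training text and samples from those counts, but the learn-patterns-then-generate loop is the same in spirit. All function names here are illustrative, not from any real system.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, every word that follows it in the training text.

    This is the 'pattern finding' step: the model's only knowledge is
    these observed word-to-word transitions.
    """
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a word seen to follow the last one.

    This mirrors, very loosely, how an LLM produces text one token at a time.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Because it can only replay transitions it has seen, such a model also shows, in miniature, why generated text can sound fluent while having no notion of truth.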
This technology can help computer programmers, writers and other workers generate ideas and do things more quickly. But Bengio and other experts also warned that LLMs can learn unwanted and unexpected behaviors.
These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”
Companies are working on these problems. But experts such as Bengio worry that as researchers make these systems more powerful, they will introduce new risks.
Short-term risk: Disinformation
Because these systems deliver information with what seems like complete confidence, it can be a struggle to separate truth from fiction when using them. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.
“There is no guarantee that these systems will be correct on any task you give them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.
Experts are also worried that people will misuse these systems to spread disinformation. Because they can converse in humanlike ways, they can be surprisingly persuasive.
“We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake,” Bengio said.
Medium-term risk: Job loss
Experts are worried that the new AI could be job killers. Right now, technologies like GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate content on the internet.
These systems cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.
A paper written by OpenAI researchers estimated that 80% of the US workforce could have at least 10% of their work tasks affected by LLMs and that 19% of workers might see at least 50% of their tasks impacted.
“There is an indication that rote jobs will go away,” said Oren Etzioni, the founding CEO of the Allen Institute for AI, a research lab in Seattle.
Long-term risk: Loss of control
Some people who signed the letter also believe artificial intelligence could slip outside our control or destroy humanity. But many experts say that’s wildly overblown.
The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.
They worry that as companies plug LLMs into other internet services, these systems could gain unanticipated powers because they could write their own computer code. They say developers will create new risks if they allow powerful AI systems to run their own code.
“If you look at a straightforward extrapolation of where we are now to three years from now, things are pretty weird,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and co-founder of the Future of Life Institute.
“If you take a less probable scenario – where things really take off, where there is no real governance, where these systems turn out to be more powerful than we thought they would be – then things get really, really crazy,” he said.
Etzioni said talk of existential risk was hypothetical. But he said other risks – most notably disinformation – were no longer speculation.
“Now we have some real problems,” he said. “They are bona fide. They require some responsible reaction. They may require regulation and legislation.”
This article originally appeared in The New York Times.