Leading figures in AI development, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may one day pose an existential threat to humanity comparable to nuclear war and pandemics.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement, released today by the Center for AI Safety, a nonprofit organization.
Philosophers have long debated the idea that artificial intelligence might become unmanageable and destroy humanity, either accidentally or deliberately. But in the past six months, following some surprising and alarming jumps in the performance of AI algorithms, the issue has come to be discussed far more widely and seriously.
In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing artificial intelligence with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio, two of the three academics awarded the Turing Award for their work on deep learning, the technology that underpins recent advances in machine learning and artificial intelligence. Dozens of entrepreneurs and researchers working on cutting-edge AI also signed.
“The statement is a great initiative,” says Max Tegmark, professor of physics at MIT and director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.
Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is for the threat of extinction from AI to become mainstream, enabling everyone to discuss it without fear of ridicule,” he adds.
Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote released alongside his organization’s statement.
The current alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models are a type of artificial neural network trained on huge amounts of human-written text to predict which words should follow a given sequence. Fed enough data, and given additional training in the form of human feedback on good and bad answers, these language models can generate text and answer questions with remarkable eloquence and apparent knowledge, even if their answers are often riddled with errors.
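To make the next-word prediction idea concrete, here is a minimal, purely illustrative sketch: a toy word-frequency counter rather than a neural network, and nothing like how production models such as GPT-4 are actually built or trained. The corpus and function names are invented for the example.

```python
# Toy illustration of the "predict the next word" training objective.
# Real large language models learn these statistics with deep neural networks
# over vastly larger corpora; this bigram counter only shows the basic idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word follows each word in the text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("sat"))  # "on" — the only word seen after "sat"
print(predict_next("the"))  # one of the equally frequent words seen after "the"
```

A real model replaces the raw counts with a learned neural network that scores every possible next word given the whole preceding sequence, which is what lets it generalize beyond phrases it has literally seen.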
These language models have proven increasingly coherent and capable as they are given more data and computing power. The most powerful model created to date, OpenAI’s GPT-4, is capable of solving complex problems, including some that seem to require abstraction and logical reasoning.