'Extinction risk': AI titans raise alarm on technology's dangerous potential
A group of renowned artificial intelligence (AI) researchers and industry leaders issued a dire warning in a joint statement on Tuesday: AI may pose an "extinction risk."
Sam Altman, CEO of OpenAI, the company behind ChatGPT, and Geoffrey Hinton, the so-called "Godfather of AI," joined more than 350 notable figures in cautioning against the existential threats posed by AI.
The declaration, organised by the nonprofit Center for AI Safety, summed up their apprehensions in a succinct 22-word statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This concise alert marks the latest in a string of forewarnings from leading experts about the potential for AI to generate societal chaos – from propagating misinformation to inciting economic instability via job losses, or even outright assaults on humanity. With the soaring popularity of OpenAI's ChatGPT product, scrutiny on AI has intensified.
Recent incidents have underscored these concerns, such as an AI-generated photo of a fictitious Pentagon explosion that triggered a brief but impactful stock market sell-off, obliterating billions in value before the hoax was debunked.
The Center for AI Safety intended the brief statement to "open up discussion" on the wide array of critical and urgent risks associated with AI. High-profile signatories included Google DeepMind's chief, Demis Hassabis, and Anthropic CEO, Dario Amodei.
Altman, Hassabis, and Amodei were among a select group of experts who recently consulted with President Biden on potential AI risks and regulations, the New York Post reported. Hinton, along with another signatory, Yoshua Bengio, won the 2018 Turing Award – the top honour in computing – for their significant strides in neural networks, hailed as "major breakthroughs in artificial intelligence."
"Mitigating the risk of extinction from AI will require global action," emphasized Dan Hendrycks, director of the Center for AI Safety. He compared the effort needed to tackle AI risks to past international cooperation in reducing the threat of nuclear war.
Despite his leadership role at OpenAI, Altman has been openly concerned about the unchecked development of advanced AI systems, even advocating for government regulation of the technology.
In a similar vein, Hinton recently left his part-time AI research role at Google, enabling him to openly voice his concerns. He admitted some regret for his life's work, fearing its potential misuse by "bad actors."
The latest statement was notably shorter than a previous open letter, backed by figures including billionaire Elon Musk, that called for a six-month pause in advanced AI development to devise safety measures. Musk has even suggested a "non-zero chance" of AI "going Terminator," alluding to the dystopian scenario from the 1984 sci-fi film.
Echoing Musk's sentiment, former Google CEO Eric Schmidt also warned that AI could soon become an "existential risk" to humanity, with the potential to harm or kill countless people.