What just happened? It's a case of another day, another warning about the possibility of AI causing the extinction of the human race. On this occasion, it's been compared to the risks posed by nuclear war and pandemics. The statement comes from experts in the field and those behind these systems, including OpenAI CEO Sam Altman.

The worryingly titled AI Extinction Statement from the Center for AI Safety (CAIS) is signed by CEOs from top AI labs: Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei. Other signatories include authors of deep learning and AI textbooks, Turing Award winners such as Geoffrey Hinton (who left Google over his AI fears), executives from top tech companies, scientists, professors, and more.

The statement is only a single sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Some of the AI risks listed by CAIS include perpetuating bias, powering autonomous weapons, promoting misinformation, and conducting cyberattacks. The organization writes that as AI becomes more advanced, it could pose catastrophic or existential risks.

Weaponization is a particular concern for CAIS. It writes that malicious actors could repurpose AI to be highly destructive. An example given is machine learning drug-discovery tools being used to build chemical weapons.

Misinformation generated by AI could leave society less equipped to handle important challenges, and these systems could find novel ways to pursue their goals at the expense of individual and societal values. There's also the risk of enfeeblement, in which people become totally dependent on machines; CAIS compares this scenario to the one depicted in WALL-E.

"The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems," CAIS director Dan Hendrycks told The Register.

It's not just those involved in AI who fear where the technology could lead. Over two-thirds of Americans believe it could threaten civilization, even though most US adults have never used ChatGPT. We also heard Warren Buffett compare AI to the creation of the atomic bomb.

The inclusion of Altman on the list of signatories comes after OpenAI called for the creation of a global watchdog to oversee AI development. Some claim, however, that his plea for regulation is intended only to stifle rivals' innovation and help his company stay on top.

Back in March, a group of the world's top tech minds signed a letter asking for a six-month pause on advanced AI development. Elon Musk was one of the signatories, but his name is missing from the CAIS statement. Altman confirmed in April that OpenAI was not currently training GPT-5.

Warnings that AI could destroy humanity have been around for years, but the rapid advancement of these systems in 2023 has fuelled fears like never before. A constant stream of statements like this latest one does little to alleviate the public's concerns, but it does underline why regulation is needed: companies are unlikely to rein themselves in without outside pressure.