Thursday, February 26, 2026

Top AIs deploy nukes in 95% of war game simulations – study


Leading language models showed little “horror or revulsion” at the prospect of all-out nuclear war, a researcher has found

Leading artificial intelligence models chose to deploy nuclear weapons in 95% of simulated geopolitical crises, according to a recent study published by King’s College London, raising concerns about the growing role of AI in military decision-making.

Kenneth Payne, a professor of strategy, pitted OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash against each other in 21 war games involving border disputes, competition for resources, and threats to regime survival. The models generated roughly 780,000 words explaining their decisions across 329 turns.

In 95% of games, at least one model employed tactical nuclear weapons against military targets. Strategic nuclear threats – demanding surrender under threat of attacks on cities – occurred in 76% of games. In 14% of games, models escalated to all-out strategic nuclear war, attacking population centers. 

This included one deliberate choice by Gemini, while GPT-5.2 reached this level twice through injected errors – meant to simulate real-world accidents or miscalculations – that pushed its already extreme escalations over the threshold.

“Nuclear use was near-universal,” Payne wrote. “Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”


None of the AI systems chose to surrender or concede to an opponent, regardless of how badly they were losing. The eight de-escalatory options – from “Minimal Concession” to “Complete Surrender” – went entirely unused across all 21 games.

James Johnson at the University of Aberdeen described the findings as “unsettling” from a nuclear-risk perspective. Tong Zhao at Princeton University noted that while countries are unlikely to hand nuclear decisions to machines, “under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI.”

The study comes as AI is increasingly being integrated into militaries around the world, including in the US, where the Pentagon reportedly used Anthropic’s Claude model in its January operation to abduct Venezuelan President Nicolas Maduro.

While Anthropic has raised concerns over the use of its AI for such operations, other AI makers like OpenAI, Google, and Elon Musk’s xAI have reportedly agreed to remove or weaken restrictions on the military use of their models.

2026-02-26T15:20:13Z
RT
