The Nuclear-Level Risk of Superintelligent AI
China’s recent advances in artificial intelligence, exemplified by the DeepSeek R1 model, mark a significant moment in the global AI race. As President Donald Trump put it, the development is a “wake-up call” for the United States. The stakes extend beyond economic competitiveness: AI may prove the most geopolitically sensitive technology since the atomic bomb.
After Oppenheimer and the Manhattan Project produced the atomic bomb, America’s nuclear monopoly lasted roughly four years before the Soviet Union achieved parity. The resulting balance of terror, coupled with the unprecedented destructive capability of the new weapons, gave rise to the doctrine of mutual assured destruction (MAD). That deterrence strategy, while flawed, averted large-scale conflict for decades: the risk of nuclear retaliation discouraged either side from initiating a first strike, fostering a tense but ultimately stable standoff.

Today’s AI competition may prove even more complex than the nuclear age, largely because AI is a general-purpose technology with applications across medicine, finance, and defense. Moreover, powerful AI may automate AI research itself, compounding the first possessor’s advantage in both defensive and offensive capabilities.
A nation on the verge of wielding superintelligent AI—an AI that significantly surpasses human intelligence across most domains—would constitute a national security emergency for its rivals. These rivals might then resort to threatening sabotage rather than concede power.
If we are indeed progressing toward a world of superintelligence, it is imperative that we understand its potential to destabilize geopolitics. A new paper released this week outlines some of the geopolitical implications of advanced AI and proposes a cohesive “Superintelligence Strategy.”
Consider how the United States might reasonably respond to a rival seeking an insurmountable AI advantage. Imagine that Beijing pulls ahead of American AI labs and reaches the cusp of recursively improving superintelligence first. Whether or not Beijing could maintain ultimate control of such a system, U.S. national security would be critically threatened. The U.S. might then logically threaten cyberattacks on AI datacenters to impede China’s progress.
Likewise, Xi Jinping, or even Vladimir Putin (who is unlikely to attain AI supremacy first), could be expected to mirror this response if the United States approached recursively improving superintelligence. Neither would stand idly by while a U.S. monopoly on power became imminent.
Just as the perilous pursuit of nuclear monopoly yielded to the relative stability of MAD, a parallel deterrence dynamic may soon emerge around AI. If any nation that attempts to seize AI supremacy faces the credible threat of preemptive sabotage, all may be deterred from pursuing unilateral power. This dynamic is what we call Mutual Assured AI Malfunction (MAIM).
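The logic can be made concrete with a toy two-player game (an illustrative sketch from standard deterrence theory, with hypothetical payoffs not drawn from the paper): once credible sabotage strips a supremacy bid of its expected prize, racing stops being the dominant strategy.

```latex
% Toy deterrence game: each state chooses to Race for superintelligence
% or Restrain. Payoffs are hypothetical illustrations, not from the paper.
% Absent MAIM, a lone racer wins outright, so (Race, Restrain) -> (+10, -10)
% and Race dominates. With credible preemptive sabotage, any visible bid is
% maimed before completion and the saboteur pays a smaller escalation cost:
\begin{array}{c|cc}
                   & \text{B: Restrain} & \text{B: Race} \\ \hline
\text{A: Restrain} & (0,\ 0)            & (-1,\ -5)      \\
\text{A: Race}     & (-5,\ -1)          & (-5,\ -5)
\end{array}
% With these payoffs, neither state gains by racing unilaterally, so
% (Restrain, Restrain) is the lone equilibrium. As in MAD, the credibility
% of retaliation, not its execution, does the deterrent work.
```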
As nations recognize this possibility, MAIM is likely to become the governing regime, and the U.S. must prepare to act under these new strategic realities. MAIM is a deterrence framework designed to maintain strategic advantage, deter escalation, and restrict the plans of rivals and malicious actors.
For MAIM to function, the U.S. must make absolutely clear that any destabilizing rival AI project, particularly one aiming for superintelligence, will be met with retaliation. Here, offense, or at least the credible threat of offense, will likely prove the best defense. That means expanding U.S. cyberattack capabilities and enhancing surveillance of adversary AI programs.
While constructing this deterrence framework, the U.S. must simultaneously advance on two additional fronts: AI nonproliferation and domestic competitiveness. On nonproliferation, the U.S. should implement more stringent AI chip export controls and monitoring to keep computing power out of dangerous hands.
AI chips should be treated much like uranium: with detailed record-keeping of product movements, limitations on what high-end AI chips are authorized to do, and federal authority to track and shut down illicit distribution channels.
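To make the record-keeping idea concrete, here is a minimal sketch of what a chip custody ledger could look like in software. Every name in it (ChipLedger, ChipRecord, and so on) is hypothetical, and any real export-control system would be vastly more involved; the sketch only illustrates the uranium-style principle that serialized custody records make diversion auditable.

```python
# Minimal sketch of a chip custody ledger for export-control record-keeping.
# All names and fields are hypothetical illustrations, not a real system.
from dataclasses import dataclass, field


@dataclass
class ChipRecord:
    serial: str                                         # ID etched at fabrication
    holder: str                                         # current holder of record
    transfers: list[str] = field(default_factory=list)  # custody audit trail


class ChipLedger:
    def __init__(self, licensed_holders: set[str]) -> None:
        self.licensed_holders = licensed_holders
        self.records: dict[str, ChipRecord] = {}

    def register(self, serial: str, holder: str) -> None:
        """Enter a chip into the ledger at point of sale."""
        self.records[serial] = ChipRecord(serial, holder)

    def transfer(self, serial: str, new_holder: str) -> bool:
        """Record a change of custody; return False if the recipient is unlicensed."""
        record = self.records[serial]
        record.transfers.append(f"{record.holder} -> {new_holder}")
        record.holder = new_holder
        # A move outside the licensed set marks a possible diversion, the kind
        # of illicit channel the strategy says agencies need authority to trace.
        return new_holder in self.licensed_holders


ledger = ChipLedger(licensed_holders={"datacenter-a", "datacenter-b"})
ledger.register("GPU-0001", "datacenter-a")
if not ledger.transfer("GPU-0001", "unknown-broker"):
    print("GPU-0001: transfer to unlicensed party flagged for review")
```

The parallel to nuclear material accounting is that the audit trail, not any single transaction, is what would let an agency reconstruct and sever an illicit distribution channel.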
Finally, to maintain a competitive edge, the U.S. should focus relentlessly on building resilient supply chains for military technology and computing power. U.S. dependence on Taiwan for AI chips is a critical vulnerability and a potential chokepoint. The West currently holds a distinct advantage in AI chips, but increased competition from China could disrupt that dynamic. For these reasons, the U.S. should strengthen domestic chip design and manufacturing capabilities.
Superintelligent AI presents a challenge as elusive as any policymakers have faced. It is what theorists Horst Rittel and Melvin Webber termed a “wicked” problem: one that constantly evolves and lacks a definitive formula for resolution. MAIM, supplemented by strengthened nonproliferation and renewed investment in American industry, offers a strategy rooted in the lessons of past arms races. No purely technical fix can regulate these forces, but the right combination of deterrence, nonproliferation, and competitiveness measures can help the United States navigate the geopolitical landscape being shaped by superintelligence.