A new policy paper, authored by prominent figures in the AI industry, urges the U.S. to reconsider its approach to developing AI systems with superhuman intelligence, often referred to as AGI.
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks co-authored the paper, titled “Superintelligence Strategy.” In it, they argue against a “Manhattan Project”-style push to rapidly develop AGI, warning that such an aggressive bid could provoke retaliation from China.
“[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it,” the authors write. They suggest that an exclusive bid for control of superintelligent AI systems could backfire, leading to cyberattacks and destabilizing international relations.
The paper directly challenges the idea, recently championed by a U.S. congressional commission and echoed by U.S. Secretary of Energy Chris Wright, of a government-backed AGI effort modeled on the Manhattan Project that produced the atomic bomb. Wright has said that the U.S. is at “the start of a new Manhattan Project” on AI. Schmidt, Wang, and Hendrycks contend that this approach is not the best way to compete with China.
The paper frames the situation as a potential “mutually assured destruction” scenario for AI. The authors argue that, just as global powers do not seek monopolies over nuclear weapons (which could provoke a preemptive strike from an adversary), the U.S. should be cautious about racing toward dominance over extremely powerful AI systems.
They also introduce the concept of Mutual Assured AI Malfunction (MAIM): the idea that governments could proactively disable threatening AI projects in order to prevent adversaries from weaponizing AGI.
The authors propose a shift in focus from “winning the race to superintelligence” to developing defensive strategies that deter other countries from creating it. Their recommendations include expanding the U.S. arsenal of cyberattacks to target threatening AI projects controlled by other nations, and limiting adversaries’ access to advanced AI chips and open source models.
The paper identifies two opposing viewpoints in the AI policy world: the “doomers,” who advocate for slowing AI progress to avoid catastrophic outcomes, and the “ostriches,” who believe nations should accelerate AI development and hope for the best. The paper proposes a third way: a measured approach that prioritizes defensive strategies.
The stance is somewhat noteworthy given that Schmidt has previously advocated aggressive U.S. competition with China in the development of advanced AI. In a recent op-ed, he described DeepSeek as marking a turning point in that race.
Still, the authors emphasize that U.S. decisions regarding AGI will have global consequences. As the world watches how far the U.S. pushes the frontier of AI, Schmidt and his co-authors suggest a defensive approach may be the wiser one.