Breaking News in Technology & Business – Tech Geekwire


    Xi and Biden’s AI Nuclear Pact: Lessons from the Cold War

    By techgeekwire | March 6, 2025

    In a significant step towards safeguarding global security, Chinese President Xi Jinping and U.S. President Joe Biden agreed in late 2024 that artificial intelligence (AI) should never be given the authority to initiate a nuclear war. This critical policy decision built upon years of discussions within the Track II U.S.-China Dialogue on Artificial Intelligence and National Security, a collaborative initiative spearheaded by the Brookings Institution and Tsinghua University’s Center for International Security and Strategy.

    To understand the gravity of this agreement, one can examine historical events. Looking back at the U.S.-Soviet rivalry during the Cold War reveals scenarios that could have played out, and the potentially disastrous results, had AI been in charge of nuclear launch or preemptive strike decisions. Given the doctrines and procedures of the era, an AI system, perhaps "trained" on hypothetical situations reflecting the prevailing wisdom, might have mistakenly initiated a nuclear strike.

    Fortunately, in the crises to be discussed (the 1962 Cuban missile crisis, the September 1983 false-alarm crisis, and the November 1983 Able Archer exercise), human judgment prevailed, demonstrating a greater understanding of the stakes than the strategies of the day. While recognizing that humans are fallible, and that AI systems could provide useful inputs to human decisions, the potential for cold, calculated, and potentially catastrophic actions by a machine is a sobering lesson.

    Three Close Calls that Highlight the Need for Human Oversight

    The Cuban Missile Crisis (1962)

    The Cuban missile crisis began when U.S. intelligence detected the Soviet Union's deployment of nuclear-capable missiles and tactical nuclear weapons to Cuba in 1962, a move intended to shift the nuclear balance. Despite uncertainty about the full extent of Soviet weaponry already on Cuban soil, President John F. Kennedy's advisors, including the Joint Chiefs of Staff, recommended conventional air strikes against the Soviet positions. Such strikes risked escalating the conflict: nearby Soviet submarine commanders, armed with nuclear torpedoes, might have used them against U.S. warships, and Soviet ground troops in Cuba might have moved against the U.S. base at Guantanamo Bay.

    (The original article is by Michael E. O’Hanlon, Director of Research – Foreign Policy and Senior Fellow at the Brookings Institution.)

    President Kennedy opted instead for a combination of a naval quarantine of Cuba to prevent further weaponry from reaching the island and quiet diplomacy with Soviet Premier Nikita Khrushchev. This approach included offers to remove American missiles from Turkey and to commit to not invade Cuba. The Soviets agreed to this compromise, withdrawing their missiles and nuclear weapons and halting further military buildup on the island, then run by Fidel Castro’s government.

    The September 1983 False-Alarm Crisis

    A pivotal moment arrived in the September 1983 false-alarm crisis. Soviet watch officer Stanislav Petrov saw indications from sensor systems that the United States had launched an attack on the Soviet Union with five intercontinental ballistic missiles (ICBMs) that would detonate within roughly 20 minutes. The sensors, however, had misinterpreted sunlight reflecting off unusual cloud formations as missile launches.

    Petrov made the extraordinary decision not to report the perceived American strike as real, even though doctrine called for immediate retaliation against any incoming attack. His judgment in a difficult situation, along with his human instincts and character, appears to have prevented the unimaginable.

    The November 1983 Able Archer Exercise

    Only a couple of months later, in November 1983, NATO conducted the major military exercise known as Able Archer during a particularly tense period in U.S.-Soviet relations. Concerns ran high: the prior March, President Ronald Reagan had declared the Soviet Union an "evil empire" and, weeks later, announced his "Star Wars" missile-defense initiative. The Soviet downing of Korean Air Lines Flight 007 in September had heightened tensions further.

    During the exercise, NATO forces simulated preparations for a nuclear attack, including the positioning of dummy warheads on nuclear-capable aircraft. Soviet intelligence witnessed these preparations but could not discern that the warheads were fake, so Soviet leaders readied their own nuclear-capable systems with real warheads. American intelligence observed the Soviet preparations in turn, but a U.S. Air Force general, Leonard Perroots, quickly understood the situation and recommended against responding by placing real warheads on American systems.

    Could AI Have Done Better?

    In all three of these situations, AI could have made decisions that might have initiated nuclear war. During the Cuban missile crisis, the prevalent American sentiment was to protect the Western Hemisphere and prevent any communist expansion; the positioning of Soviet weapons so close to American shores was seen as an unacceptable risk. Since sensors could not confirm the absence of nuclear warheads, a cautious approach under the doctrines of the day would have leaned toward eliminating those Soviet capabilities before they became operational. It took a human, President Kennedy, with experience from World War II and a keen understanding of bureaucratic errors, to act differently.

    In the September 1983 false-alarm crisis, it took a human to recognize how unlikely it was that the United States would launch so small an attack. A different officer, or an AI-directed control center, might have assumed the five ICBMs were attempting a decapitation strike and retaliated.

    During Able Archer, an AI system, programmed with the prevailing doctrine of the time, might have recommended a nuclear alert. With the superpowers' plans for massive first strikes, the situation could have become catastrophic. It is possible that well-programmed AI would have recommended restraint, but these examples highlight the danger of entrusting the most important decision in human history to a machine.

    Xi and Biden’s decision was the correct one, and their successors should maintain this policy.
