U.S. Military Expands AI Integration for Strategic Planning
Amid rising geopolitical tensions, the U.S. Department of Defense is accelerating its integration of artificial intelligence to enhance its strategic capabilities. The focus is on employing AI agents to simulate potential confrontations with foreign adversaries.
On Wednesday, the Defense Innovation Unit (DIU), a Department of Defense organization, awarded a prototype contract to Scale AI, a San Francisco-based company. The contract is for the development of Thunderforge, an AI platform designed to boost decision-making in battlefield scenarios.
“[Thunderforge] will be the flagship program within the DoD for AI-based military planning and operations,” Scale AI CEO Alexandr Wang stated on Wednesday via X.
Scale AI, founded in 2016 by Wang and Lucy Guo, focuses on expediting AI development by providing labeled data and the infrastructure needed to train AI models. For Thunderforge, Scale AI will collaborate with Microsoft, Google, and the American defense contractor Anduril Industries, according to Wang.
Initially, Thunderforge will be deployed to the U.S. Indo-Pacific Command, which covers the Pacific and Indian Oceans along with parts of Asia, and the U.S. European Command, which oversees Europe, the Middle East, the Atlantic Ocean, and the Arctic. The platform will support campaign strategy, resource allocation, and strategic assessments.
“Thunderforge brings AI-powered analysis and automation to operational and strategic planning, allowing decision-makers to operate at the pace required for emerging conflicts,” stated DIU Thunderforge Program Lead Bryce Goodman.
Agentic Warfare: A Paradigm Shift
This move toward AI-focused or “Agentic Warfare” marks a shift away from traditional planning methods, in which experts manually coordinate scenarios and reach decisions over several days. Under “Agentic Warfare,” AI-driven models could execute those decisions in minutes.
The reliability of AI in real-world defense applications poses substantial challenges, especially when confronted with unpredictable scenarios and ethical considerations.
“These AIs are trained on collected historical data and simulated data, which may not cover all the possible situations in the real world,” University of Southern California computer science professor Sean Ren explained to Decrypt. “Additionally, defense operations are high-stakes use cases, so we need the AI to understand human values and make ethical decisions, which is still under active research.”
Challenges and Safeguards
Ren, also the founder of Los Angeles-based decentralized AI developer Sahara AI, notes that constructing realistic AI-driven wargaming simulations presents significant challenges in terms of accuracy and adaptability.
“I think two key aspects make this possible: collecting a large amount of real-world data for reference when building wargaming simulations and incorporating various constraints from both physical and human aspects,” Ren said.
To create adaptive and strategic AI for wargaming simulations, Ren emphasizes the importance of employing training methods that allow the system to learn from experience and refine its decision-making over time.
“Reinforcement learning is a model training technique that can learn from the ‘outcome/feedback’ of a series of actions,” he said. “In wargaming simulations, the AI can take exploratory actions and look for positive or negative outcomes from the simulated environment. Depending on how comprehensive the simulated environment is, this is helpful for the AI to explore various situations exhaustively.”
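The loop Ren describes can be sketched with tabular Q-learning, one of the simplest reinforcement learning techniques: the agent takes exploratory actions, receives positive or negative outcomes from a simulated environment, and updates its value estimates. The toy environment below is entirely hypothetical, a minimal stand-in for the far richer wargaming simulations discussed above, not a description of how Thunderforge actually works.

```python
import random

# Hypothetical toy environment: the agent moves along positions 0..4,
# earning +1 for reaching the goal (position 4) and -1 for falling
# back past position 0. Reward values and layout are illustrative.
GOAL, ACTIONS = 4, [-1, +1]  # actions: retreat (-1) or advance (+1)

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = state + action
    if nxt >= GOAL:
        return GOAL, 1.0, True   # positive outcome from the simulation
    if nxt < 0:
        return 0, -1.0, True     # negative outcome
    return nxt, 0.0, False

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    # Q-table: estimated value of each (state, action) pair
    q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: occasionally take an exploratory action
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Learn from the outcome/feedback of the action
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)  # the learned policy should advance (+1) at every state
```

Production systems would use far larger state spaces and neural-network function approximation rather than a lookup table, but the core learn-from-outcomes loop is the same.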
The Expanding Role of AI in Military Strategy
The Pentagon is actively establishing more agreements with private AI firms such as Scale AI to fortify its capabilities, reflecting the growing significance of AI in military strategy.
While military AI may conjure dystopian images from science fiction, military AI developers like San Diego-based Kratos Defense assert that such fears are unfounded.
“In the military context, we’re mostly seeing highly advanced autonomy and elements of classical machine learning, where machines aid in decision-making, but this does not typically involve decisions to release weapons,” Kratos Defense President of Unmanned Systems Division Steve Finley previously told Decrypt. “AI substantially accelerates data collection and analysis to form decisions and conclusions.”
A significant concern surrounding the integration of AI into military operations is ensuring robust human oversight in decision-making, especially in high-stakes scenarios.
“If a weapon is involved or a maneuver risks human life, a human decision-maker is always in the loop,” Finley said. “There’s always a safeguard—a ‘stop’ or ‘hold’—for any weapon release or critical maneuver.”