Space Force Turns to AI as Space Becomes More Crowded
The U.S. Space Force is actively exploring how artificial intelligence (AI) and automation can enhance—or even replace—human oversight in monitoring space for potential threats. Speaking at a Booz Allen Hamilton event, the service’s acting deputy chief of space operations for cyber and data highlighted the necessity of adapting to an increasingly congested space environment.

Patrick Tucker
Seth Whitworth, the acting deputy chief, emphasized that the days of a single airman monitoring a single satellite are long gone. The challenge now lies in determining how many space objects a single guardian can effectively manage, monitor, and track with the assistance of automated tools. “What trust level is required where I now can have one guardian reviewing 10, 20, 1,000? I don’t know what that number is. It’s the conversations that we’re having,” Whitworth noted.
Guardians are currently training automated tools on their own activities and processes. The goal is to build trust in these tools and to identify safe applications for AI, such as drafting performance reports or supporting wargames. Whitworth said the Space Force is also considering a “chat-like capability” to help guardians analyze data and track objects more easily.
Nate Hamet, founder and CEO of Quindar, a Colorado-based startup, has already developed a chatbot for space operators. Hamet said the tool is built on the typical conversations its users have, including questions about maneuver rules and propulsion systems. The sheer volume of space objects, and the data they generate, is pushing the Space Force to explore AI for other missions as well, such as tracking and satellite operation.
Pat Biltgen, a principal at Booz Allen Hamilton specializing in artificial intelligence, suggested that the military should consider giving automated agents more responsibility as monitoring and responding to attacks in the space domain become more challenging. Biltgen argued that a “human-on-the-loop” approach may satisfy current concerns about human control, but it could prove counterproductive, and even unsafe, if human monitors are overwhelmed by the sheer volume of data. He described a scenario in which a tired, isolated operator, flooded with information, is still responsible for making a critical decision, underscoring the potential advantage of AI in such moments.
“We have to evaluate the risk of, where can we pull guardians out? Where is that trust level” necessary to hand that role over to an automated agent, Whitworth said.