The Space Force is looking to use artificial intelligence to identify and track objects in space as the orbital environment grows more crowded, the service’s acting deputy chief of space operations for cyber and data said at a recent event.
Gone are the days when a single airman could easily monitor one satellite or space object to ensure it was behaving properly and not on a collision course, said Seth Whitworth. Now, a key question is how many space objects a single guardian can manage simultaneously with the aid of automation.
“What trust level is required where I now can have one guardian reviewing 10, 20, 1,000? I don’t know what that number is. It’s the conversations that we’re having,” Whitworth noted.
Guardians are currently training automated tools on their existing processes to build trust and identify appropriate AI applications. This includes using AI to assist with tasks like generating performance reports and participating in wargames. Establishing this trust, however, will require a meticulous and continuous effort.
“If you go out and talk to guardians, I think they would say that they want a chat-like capability,” Whitworth said.
Nate Hamet, founder and CEO of Quindar, a Colorado startup that creates AI tools for monitoring space missions, said his company has developed a type of chatbot for space operators based on “the conversations that we see our users” having. These conversations include questions about the rules for particular maneuvers, propulsion systems for different objects, and more.
The increasing number of space objects, along with the data generated about them, means the Space Force must also consider ways to deploy AI for other missions, like tracking or flying satellites.
“We have to evaluate the risk of, where can we pull guardians out? Where is that trust level,” Whitworth said, referencing the point at which the service can hand over control to an automated agent.
Pat Biltgen, a principal at Booz Allen Hamilton specializing in artificial intelligence, emphasized that the increasing challenge of monitoring the space domain and responding to potential attacks requires the military to consider a more significant role for automated agents. He suggested that relying on a “human-on-the-loop” to monitor automated systems might address current concerns about human control. However, as AI becomes more robust, interconnected, and safe, distracted human monitors could become a liability.
Biltgen described a “worst nightmare” scenario in which reliance on human judgment becomes a weakness. In this scenario, a tired, isolated human operator is presented with overwhelming amounts of data, making it difficult to make the best decision, but still retaining the ultimate authority. “There’s this massive, multibillion-dollar system … It’s all designed to work together in network-centric warfare. And there’s one person on Christmas Eve that is the lowest-ranked person, stuck with the least amount of leave, that has a big red button that says, ‘Turn off the United States.’”
