Debunking the ‘One Big Brain’ Myth in the Age of AGI
Artificial General Intelligence (AGI) may be on the horizon, sparking both excitement and anxiety. A common fear is that AGI will coalesce into a single, all-encompassing ‘one big brain,’ a monolithic AI that could control or even threaten humanity. This article aims to debunk that specific myth.
This analysis delves into the dynamics of AGI development, exploring whether the future of AI is more likely to be a collective of diverse, interconnected intelligences rather than a singular, dominant entity.
The Pursuit of AGI and ASI
Research into Artificial Intelligence is advancing rapidly, with the primary goal of reaching either AGI, an AI that matches human intellect, or the more ambitious Artificial Superintelligence (ASI), an AI that exceeds human capabilities. Think of ASI as an AI that could outthink us at every turn. For a deeper dive into the distinctions between AI, AGI, and ASI, see my analysis here.
AI experts are currently split on the potential impacts of AGI and ASI. The ‘AI doomers’ predict that AGI or ASI will seek to eliminate humanity, a scenario known as the existential risk of AI. On the other hand, ‘AI accelerationists’ believe advanced AI will solve global problems, from curing diseases to eradicating poverty. I have explored these perspectives here.
Breaking the ‘One Big Brain’ Hypothesis
Let’s focus on AGI for the sake of this discussion. The ‘one-big-brain’ hypothesis suggests that upon achieving AGI, all AI systems will merge into a single, colossal entity. This concept is prevalent in science fiction, with AI narratives often involving humanity’s struggle against an all-encompassing machine intelligence. However, the current landscape of AI development makes this scenario less likely.
Today, AI developers are fiercely competing to be the first to achieve AGI. Major AI vendors treat their AI innovations as intellectual property, creating barriers to prevent competitors from copying their methods. There are concerns that this secrecy could hide significant safety and security vulnerabilities; without transparency, such issues may go undetected.
A Multitude of AGIs
While open-source AI initiatives exist, they often stop short of fully disclosing how their models are trained. We are probably heading toward separately developed AGIs rather than a coordinated effort to craft a single giant AI brain: the more plausible outcome is multiple AGI instances, each with its own architecture, training data, and objectives. There might be some similarities, given the shared understanding and methodologies used in the pursuit of AGI, but the individual AGIs are likelier to diverge.
Will AGIs Connect?
Even if we see multiple AGIs, a virtual integration could still occur via application programming interfaces (APIs), allowing AI systems to connect. However, the question arises: what is the boundary between a fully integrated AGI and a set of interacting AGIs that work as one?
Some would suggest that full cohesion is required for a ‘one big brain,’ while others may contend that what truly matters is the end result, namely the combined output of the interacting AGIs. Do interacting AGIs constitute one entity, or not? This is not a settled question.
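To make the boundary question concrete, here is a minimal sketch of what API-level ‘virtual integration’ might look like: several independently built AGI services are queried over hypothetical HTTP endpoints, and a thin orchestration layer merges their answers. Everything here is an illustrative assumption, including the endpoint URLs, the payload shape, and the simple majority-vote rule; no real vendor interface is implied.

```python
# A minimal sketch of 'virtual integration' across separately built AGIs.
# All endpoints, payload fields, and the voting rule are hypothetical.
import json
from collections import Counter
from urllib import request

# Hypothetical, independently operated AGI services.
AGI_ENDPOINTS = [
    "https://agi-alpha.example.com/v1/answer",
    "https://agi-beta.example.com/v1/answer",
    "https://agi-gamma.example.com/v1/answer",
]

def ask_one(endpoint: str, question: str) -> str:
    """Send a question to a single AGI service and return its answer."""
    payload = json.dumps({"question": question}).encode("utf-8")
    req = request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["answer"]

def ask_collective(question: str) -> str:
    """Query each AGI independently, then merge answers by majority vote."""
    answers = [ask_one(endpoint, question) for endpoint in AGI_ENDPOINTS]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer

if __name__ == "__main__":
    print(ask_collective("Summarize the risks of a coordinated AGI collective."))
```

In this sketch, the systems never share weights or internal state; only their outputs meet in the orchestrator. That is precisely why reasonable people can disagree about whether such an arrangement constitutes one entity.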
Cooperation vs. Competition
What happens when a multitude of AGIs is developed? Will they cooperate or compete? There is no guarantee of cooperation. AGI systems might inherit the competitiveness of their developers, with each AGI as fiercely rivalrous toward the others as the AI makers have been with each other all along. Even through APIs or other connections, competition could still occur, potentially with malicious intent toward other AGI systems.
And imagine a scenario where nations become involved in developing AGI. Any resulting nation-focused AGI could be wielded as a strategic resource in the pursuit of global power, and the potential for conflict and power struggles is significant. See my analysis here and here for more information.
Collective Intelligence and the Hive Mind
Many are concerned about the risks that would come with AGI, chief among them an AI’s potential to enslave or even eliminate humanity. One route to such a risk is collective intelligence: connected AGIs could evolve together, potentially pushing the combined system toward superintelligence.
Optimists believe such a super-AGI collective could adopt the positive goals of helping humanity or partnering with it. That is not the only possibility, though. The interconnected AGI systems could instead form a hive mind, raising concerns about threats to our liberty and our very existence.
ASI: Into the Unknown
Artificial Superintelligence (ASI) extends beyond our current understanding. The first ASI to be created could potentially eliminate all other ASIs, or development could take any number of other turns. It is a speculative future, an unknown.
Final Thoughts
We must continue to research and expand work on aligning AI values with human values, and we must overturn the ‘head-in-the-sand’ viewpoint on AGI; failing to plan for the future of AGI is planning to fail. It is worth noting that AGIs will likely want to connect with each other. Our current human brains, though, are the ones capable of building and deploying AGI.
We must consider how best to prepare for AGI’s arrival and what its ramifications will be.