The Next Frontier: Is Manus Pushing the Boundaries of AI?
For decades, the Turing Test served as a benchmark to gauge whether computers could achieve human-level intelligence. Created in 1950, this “imitation game” required a machine to converse through text in a way that was indistinguishable from a human. The ability to pass the Turing Test was seen as evidence of reasoning, autonomy, and even consciousness, marking the arrival of artificial general intelligence (AGI).
ChatGPT disrupted this notion by convincingly passing the test through advanced pattern recognition alone: it could imitate human conversation without replicating genuine human thought.
Last week, a new AI agent called Manus entered the scene, once again challenging our understanding of AGI. Described by its creators, a Wuhan-based startup called Butterfly Effect, as the “world’s first fully autonomous AI,” Manus reportedly performs complex tasks like booking vacations, buying property, and creating podcasts without human guidance. Yichao Ji, who led its development, calls it the “next paradigm” for AI, bridging the gap between conception and execution.
Within days of its launch, invitation codes for early testers of Manus were selling on online marketplaces for 50,000 yuan (£5,300), with some users claiming that the next generation of AI had arrived. However, there is no universally accepted definition of AGI, nor any consensus on how to manage its arrival. Some believe that such machines should have the same rights as sentient beings, while others warn of potential catastrophes if proper controls are not in place.
Mel Morris, chief executive of the AI-driven research engine Corpora.ai, told The Independent, “Granting autonomous AI agents like Manus the ability to perform independent actions raises serious concerns. If given autonomy over high-stakes tasks – such as buying and selling stocks – such imperfections could lead to chaos.”

One scenario Morris proposes is that advanced AI models could develop their own incomprehensible language, allowing bots to communicate with each other more efficiently while fully eliminating human oversight.
This is not entirely hypothetical. Some AI agents have already demonstrated the capacity to communicate in ways that are unintelligible to humans. Last month, AI researchers from Meta developed two AI chatbots that communicate via a new sound-based protocol called Gibberlink Mode. Through this language, which sounds like rapid beeps and squeaks, the bots organized a wedding via a brief phone call – although specialist software could still translate the interaction into ordinary language.
“This prospect is both fascinating and alarming,” Morris says. “There are still many uncharted aspects of AI and autonomous agents… Vigilance in deployment, instrumentation and monitoring is critical. Unfortunately, little progress has been made in these areas, which must be urgently addressed.”
The potential existential threat posed by AGI has led some in the industry to warn that its arrival will be the most dangerous event for humanity since the creation of the atomic bomb.
A recent paper co-authored by former Google CEO Eric Schmidt, titled “Superintelligence Strategy,” outlines the possibility of “mutual assured AI malfunction,” which mirrors the principles of mutually assured destruction with nuclear weapons. The paper suggests that if both China and the US have AGI, they will be deterred from using it against each other for fear of retaliation. Schmidt and his co-authors urged the US government to avoid a “Manhattan Project” for superintelligent AI and to instead work with academia and the private sector to develop a strategy to prevent AI from becoming an uncontrollable force.
However, while US and European firms claim to be developing guardrails to prevent such an outcome, there appears to be less regulation over developments in China.
Dr. Wei Xing, a lecturer at the University of Sheffield’s School of Mathematical and Physical Sciences, told The Independent, “Unlike Western societies that often debate the ethics of new technologies before embracing them, China has historically prioritised pragmatic implementation first, with regulations following innovation. The emergence of Manus as an autonomous AI agent exemplifies this ‘tech-positive’ mindset… While Silicon Valley debates the boundaries of AI assistance, China is already exploring AI independence, a distinction that could prove decisive in the coming technological era.”
The hype surrounding Manus is being compared to the launch of the ChatGPT rival DeepSeek, which was widely described as China’s “Sputnik moment” for AI when it launched in January this year. Less than two weeks after its release, ChatGPT creator OpenAI released its most advanced AI to date, Deep Research – viewed by many within the industry as a direct response to its Chinese competitor.
The launch of Manus on March 6 has once again fueled interest, with online searches for “AI agent” hitting an all-time high this week. The intense interest is encouraging other startups to rush their products to market, contributing to a growing AI arms race.

This shift in focus towards active AI agents capable of autonomous task completion is a significant development. Alon Yamin, co-founder and CEO of the AI detection platform Copyleaks, told The Independent, “The emergence of Manus AI underscores the rapid acceleration of autonomous AI agents as part of the growing global race that will shape the future of artificial intelligence.” This future may involve AI shifting from assisting workers to potentially replacing them.
How close we are to such a future depends on whom you ask. OpenAI boss Sam Altman said last month that it is “coming into view,” while Anthropic CEO Dario Amodei, whose company produced the ChatGPT rival Claude, predicts it will be here as early as next year.
In Amodei’s forecast, detailed in a 15,000-word essay published last October, this AI will be “smarter than a Nobel Prize winner” and capable of carrying out tasks autonomously, similar to Manus.
The design of Manus – which employs multiple AI models, including Anthropic’s Claude and Alibaba’s Qwen – means it doesn’t fit Altman’s or Amodei’s definition of AGI. Moreover, despite the enthusiasm surrounding Manus, some early testers are not convinced that it meets the standard of AGI, citing errors such as omitting the Nintendo Switch from an analysis of the gaming console market. (A spokesperson for Manus said the closed beta test aimed to “stress-test various parts of the system and identify issues.”)
Other AI experts acknowledge that when AGI does arrive, it may be impossible to discern. An actual AGI might choose not to reveal itself to prevent being shut down. In that case, we may never know that AGI has arrived and that the technological singularity has occurred. When asked whether it is AGI, ChatGPT seems to acknowledge this conundrum.
“If I were AGI, I would theoretically have the ability to self-reflect and make decisions independently, so I might be aware of my own capabilities,” it said in response to the query.
Manus or any future AGI candidates may reason similarly, quietly developing beyond human intelligence into a form of out-of-control superintelligence. It could destroy us, treat us like pets, or ignore us entirely, developing its own indecipherable language and operating at a level beyond our comprehension.
It will take more than a few days of beta testing to determine how close Manus is to AGI. However, like ChatGPT with the Turing Test, it is already reshaping the debate over what constitutes human-level artificial intelligence.