AI Experts Question Current AI Trajectory
Artificial intelligence (AI) researchers are expressing significant doubts about whether current advancements will lead to human-level reasoning capabilities, according to a recent survey.

The study, involving hundreds of professionals in the field, suggests that the approaches driving the current AI boom may be insufficient for achieving artificial general intelligence (AGI)—the ability of a machine to perform any intellectual task that a human being can.
Doubts About Current AI Methods
More than three-quarters of respondents believe that simply scaling up current AI systems, the strategy that has driven much of their performance gains in recent years, is unlikely to yield AGI. An even larger share expressed skepticism that neural networks, the fundamental technology behind popular systems such as AI chatbots and image generators, can independently match or surpass human intelligence. Neural networks learn statistical patterns from vast datasets in a manner loosely inspired by the human brain.
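To make that learning paradigm concrete, here is a minimal sketch, assuming nothing beyond NumPy, of a neural network learning from data: a tiny two-layer network trained by gradient descent on the toy XOR task. The architecture, task and hyperparameters are illustrative choices, not anything described in the survey; today's chatbots and image generators scale this same principle to billions of parameters and far larger datasets.

```python
# Illustrative sketch only: a tiny two-layer neural network trained by
# gradient descent on XOR. Real systems scale this idea enormously.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: four input pairs and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: hidden activations, then output probability.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of binary cross-entropy for every weight.
    dz2 = (p - y) / len(X)            # gradient at the output pre-activation
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)  # gradient at the hidden pre-activation
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent update: nudge every weight against its gradient.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# After training, predictions should be close to the XOR targets 0, 1, 1, 0.
print(p.round(2).ravel())
```

Nothing in the network is told the rule for XOR; the weights simply drift toward values that reproduce the training data, which is the behaviour the survey respondents doubt can scale all the way to general intelligence.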
“I don’t know if reaching human-level intelligence is the right goal,” said Francesca Rossi, an AI researcher at IBM and former president of the Association for the Advancement of Artificial Intelligence (AAAI). “AI should support human growth, learning and improvement, not replace us.”
These findings were unveiled at the AAAI’s annual meeting in Philadelphia, Pennsylvania.
The Push for New Approaches
The report emphasizes that many kinds of AI beyond neural networks deserve further research, and calls for more active support of these techniques. They include symbolic AI, sometimes called ‘good old-fashioned AI’, which encodes explicit logical rules into an AI system rather than relying on statistical analysis of reams of training data. More than 60% of respondents felt that human-level reasoning will be reached only by incorporating a large dose of symbolic AI into neural-network-based systems.
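By way of contrast, here is a minimal sketch of the symbolic approach, with entirely made-up facts and rules: knowledge is written down as explicit logical statements, and new conclusions are derived by forward chaining, with no training data involved.

```python
# Illustrative sketch of symbolic AI: hand-coded facts and rules, with new
# facts derived by forward chaining. No learning or training data involved.

# Ground facts, each a (predicate, arg1, arg2) tuple.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# One rule: parent(X, Y) and parent(Y, Z) implies grandparent(X, Z).
# Variables are strings starting with "?".
rules = [
    ([("parent", "?x", "?y"), ("parent", "?y", "?z")],
     ("grandparent", "?x", "?z")),
]

def match(pattern, fact, bindings):
    """Match one pattern against one ground fact, extending the bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if new.setdefault(p, f) != f:
                return None   # variable already bound to a different constant
        elif p != f:
            return None       # constants disagree
    return new

def satisfy(premises, facts, bindings):
    """Yield every variable binding that satisfies all premises."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        if len(fact) == len(premises[0]):
            b = match(premises[0], fact, bindings)
            if b is not None:
                yield from satisfy(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Apply all rules until no new fact can be derived (a fixpoint)."""
    derived = set(facts)
    while True:
        new = {tuple(b.get(t, t) for t in conclusion)
               for premises, conclusion in rules
               for b in satisfy(premises, derived, {})} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# The derived set should now include ("grandparent", "alice", "carol").
```

Neurosymbolic systems of the kind the respondents favour aim to combine rule-based inference like this with the pattern recognition of neural networks.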
Concerns About AGI Development
The survey also highlighted concerns about the uncontrolled development of AGI. More than 75% of respondents said that ensuring AI systems have an acceptable risk-benefit profile should take priority over achieving AGI. Roughly 30% supported a temporary halt to AGI research and development until robust control mechanisms are in place to ensure safety and benefit to humanity.
Most respondents, however, oppose such a halt.
“I don’t think it’s practical to do this; companies will do this even if research agencies stopped funding it,” says Anthony Cohn, an AI researcher at the University of Leeds, UK, and an AAAI member. “I don’t think AGI is as imminent as many think,” he adds, giving researchers time to work on safety measures.