Apple’s Research Challenges AI Reasoning Capabilities
Apple has released a study questioning the intelligence of current AI reasoning models, claiming that they primarily memorize patterns rather than truly reason. The research, titled ‘The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,’ tested models such as Claude, DeepSeek-R1, and o3-mini.
Key Findings of the Study
The study found that these AI models excel at pattern recognition but struggle when faced with increased complexity. When presented with puzzle games like the Tower of Hanoi and River Crossing, the models performed well initially but ‘completely collapsed’ as complexity increased. The research identified three performance regimes:
- Low complexity: Standard models matched or even outperformed the specialized reasoning models
- Medium complexity: Advanced reasoning models showed some advantage
- High complexity: All models failed completely
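Puzzles like the Tower of Hanoi lend themselves to this kind of complexity scaling because the minimum solution length grows exponentially with the number of disks. A minimal sketch of the standard textbook recursion (an illustration of the puzzle itself, not code from Apple's study):

```python
# Illustration: the minimum number of moves for an n-disk Tower of Hanoi
# is 2**n - 1, so adding disks scales the puzzle's "complexity" smoothly.
# (Standard textbook recursion, not code from the study.)

def hanoi_moves(n, src="A", aux="B", dst="C", moves=None):
    """Solve Tower of Hanoi recursively, returning the list of moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, src, dst, aux, moves)  # park n-1 disks on the spare peg
    moves.append((src, dst))                  # move the largest disk
    hanoi_moves(n - 1, aux, src, dst, moves)  # restack n-1 disks on top of it
    return moves

# Solution length doubles (plus one) with each added disk: 1, 3, 7, 15, ...
for n in range(1, 11):
    assert len(hanoi_moves(n)) == 2**n - 1
```

This exponential growth is why a model that handles a few disks comfortably can still collapse entirely once a handful more are added.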
Implications for Artificial General Intelligence
Apple’s findings suggest that current AI models are not truly reasoning but rather relying on memorization. Even when given additional computing power and explicit step-by-step instructions, the models failed to improve beyond certain complexity thresholds. This challenges the notion of achieving human-level AI, or Artificial General Intelligence (AGI), by 2030 as some have predicted.

The study’s results indicate that while AI models can process information and recognize patterns effectively, they lack genuine reasoning capabilities. This limitation becomes particularly apparent when dealing with complex problems that require deeper planning and reasoning.
As the tech industry continues to develop and refine AI technologies, Apple’s research serves as a critical reminder of the current limitations of AI reasoning models. Its findings carry significant implications for future development and for expectations surrounding AGI and claims of machine sentience.