Google’s second-generation AI mathematics system, AlphaGeometry2, has made significant strides, and now exceeds the average performance of gold medalists on geometry problems at the International Mathematical Olympiad (IMO).
The IMO is renowned for its challenging geometry problems, which demand a deep understanding of mathematical concepts. Google claims the system can solve 84% of the geometry problems posed at the IMO, compared with the 81.8% solved by the average gold medalist.

Developed by DeepMind, AlphaGeometry performed at the silver-medalist level when it was first unveiled in January of last year. The latest improvements include:
- An expanded language that handles more complex problems, including those involving moving objects or linear equations over angles, ratios, and distances.
- Enhancements to its search process, paired with a greatly improved language model built on Google’s state-of-the-art Gemini models. The system can now model language better and reason by moving geometric objects around in a plane.
The expanded language raised the share of IMO geometry problems from 2000-2024 that the system can represent from 66% to 88%.
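The announcement does not describe what the expanded language actually looks like. As a rough, hypothetical illustration only, the Python sketch below shows one way a "linear equation over angles" might be encoded and checked; every name and structure here is an assumption for illustration, not AlphaGeometry2's actual domain language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Angle:
    """An angle named by its vertex and the two points defining its rays."""
    vertex: str
    point_a: str
    point_b: str

@dataclass(frozen=True)
class LinearAngleRelation:
    """A linear constraint: sum(coefficient * angle) == constant (in degrees)."""
    terms: tuple       # tuple of (coefficient, Angle) pairs
    constant: float

    def satisfied_by(self, values: dict, tol: float = 1e-9) -> bool:
        # Evaluate the linear combination with the given angle measures.
        total = sum(coeff * values[angle] for coeff, angle in self.terms)
        return abs(total - self.constant) <= tol

# Hypothetical example: encode "2*angle(ABC) + angle(CDE) = 180 degrees".
abc = Angle("B", "A", "C")
cde = Angle("D", "C", "E")
relation = LinearAngleRelation(terms=((2.0, abc), (1.0, cde)), constant=180.0)

# Check the relation against candidate angle measures.
print(relation.satisfied_by({abc: 70.0, cde: 40.0}))  # True: 2*70 + 40 = 180
print(relation.satisfied_by({abc: 60.0, cde: 40.0}))  # False: 2*60 + 40 = 160
```

Statements of this linear form are easy to state and check mechanically, which is why widening the language to include them can expand how many competition problems a system can even represent.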
Despite these achievements, Google acknowledges areas that still need work, such as handling variable points, non-linear equations, and problems involving inequalities, before geometry problem-solving can be fully automated.
DeepMind researchers aim to fully automate geometry problem-solving, eliminate errors, and increase the speed and reliability of the system.