Microsoft has unveiled a series of new “open” AI models, the most capable of which competes with OpenAI’s o3-mini on at least one benchmark. The newly released models, Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus, are designed as “reasoning” models, meaning they spend more time checking their solutions to complex problems.
These models expand Microsoft’s Phi “small model” family, which was launched a year ago to provide a foundation for AI developers building edge applications. Phi 4 mini reasoning, with approximately 3.8 billion parameters, was trained on around 1 million synthetic math problems generated by Chinese AI startup DeepSeek’s R1 reasoning model. Microsoft suggests it’s suitable for educational applications, such as “embedded tutoring” on lightweight devices.
Key Features of the New Models
- Phi 4 mini reasoning: 3.8 billion parameters, trained on synthetic math problems, designed for educational applications
- Phi 4 reasoning: 14-billion-parameter model, trained using high-quality web data and curated demonstrations from OpenAI’s o3-mini, best for math, science, and coding applications
- Phi 4 reasoning plus: Adapted from Microsoft’s previously released Phi 4 model for higher accuracy on specific tasks, approaching the performance of DeepSeek R1
Microsoft claims that Phi 4 reasoning plus approaches the performance of DeepSeek R1 despite having significantly fewer parameters (14 billion vs. R1’s 671 billion), and that it matches o3-mini on OmniMath, a math skills benchmark. The new models, along with their detailed technical reports, are available on the AI development platform Hugging Face.
According to Microsoft, these models balance size and performance by combining distillation, reinforcement learning, and high-quality training data. They are small enough for low-latency environments yet retain reasoning capabilities that rival much larger models, allowing even resource-constrained devices to perform complex reasoning tasks efficiently.