Microsoft Brings Enhanced AI to Copilot+ PCs with Distilled DeepSeek R1 Models
Microsoft is boosting its Copilot+ PCs with the introduction of distilled DeepSeek R1 models. These more compact and efficient AI models will be accessible via Azure AI Foundry, Microsoft’s platform for building and deploying AI applications.
What Are Distilled Models?
Until now, DeepSeek R1 has been offered as a single, full-size model. While powerful, it demands far more computing power than everyday devices can comfortably supply. The distilled versions of DeepSeek R1 address this by retaining the core reasoning ability of the original while being optimized to run quickly and efficiently on standard hardware.
In essence, the original DeepSeek R1 model acts as a teacher that trains smaller “student” models. The students learn to reproduce the teacher’s behavior while handling their tasks with far less compute.
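To make the teacher-student idea concrete, here is a minimal, illustrative distillation sketch in PyTorch. The toy logits, temperature, and loss are assumptions for demonstration only; they are not the actual recipe used to produce the DeepSeek R1 distilled models.

```python
# Illustrative knowledge-distillation loss (not DeepSeek's actual training code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then pull the student toward the teacher
    # with KL divergence. A higher temperature exposes more of the teacher's
    # knowledge about the relative likelihood of less probable tokens.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

# Toy usage: a batch of 4 positions over a 32,000-token vocabulary.
teacher_logits = torch.randn(4, 32000)                      # from the large "teacher"
student_logits = torch.randn(4, 32000, requires_grad=True)  # from the small "student"
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                             # gradients update the student only
```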
These distilled models enable AI-powered features to run directly on the user’s PC. Local processing removes the round trip to cloud services, which means quicker responses and features that keep working even without an internet connection.
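As a rough illustration of what on-device inference looks like, the sketch below loads one of the publicly released distilled checkpoints with the Hugging Face transformers library and generates text locally. The model ID and prompt are assumptions chosen for demonstration; the NPU-optimized builds Microsoft ships for Copilot+ PCs are packaged differently.

```python
# Sketch of local (on-device) text generation with a distilled model.
# Assumes the transformers library and enough local memory for a 1.5B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the benefits of on-device AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)   # no cloud call involved
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```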
Impact on Developers and Users
According to Microsoft, these distilled DeepSeek R1 models offer significant benefits for both developers and everyday users.
For Developers: The ability to run AI models locally lets developers build software that responds in real time. That means smarter virtual assistants, more responsive speech recognition tools, and automation that reacts instantly, without the round-trip delays of cloud processing, as illustrated in the streaming sketch below.
For Everyday Users: AI-powered tools will operate more quickly and reliably. Tasks such as composing emails, summarizing documents, and managing schedules will become significantly more efficient. Local AI processing also improves multi-tasking, enhances battery life, and provides better privacy by keeping sensitive information on the device itself.
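For the real-time responsiveness described above, a common pattern is to stream tokens as they are generated instead of waiting for the full answer. The sketch below reuses the illustrative checkpoint from the earlier example; TextStreamer prints each token as soon as it is produced, so an assistant or chat window can start rendering output immediately.

```python
# Sketch of streaming, real-time output from a locally running distilled model.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

streamer = TextStreamer(tokenizer, skip_prompt=True)    # print tokens as they arrive
inputs = tokenizer("Draft a two-sentence status update for my team.", return_tensors="pt")
model.generate(**inputs, streamer=streamer, max_new_tokens=80)
```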
How Copilot+ PCs Handle Local AI Processing
The Neural Processing Unit (NPU) is the key technology that makes on-device AI processing possible. Unlike traditional CPUs, which are built for general-purpose sequential work, and GPUs, which are built for graphics and parallel math, NPUs are engineered specifically for the matrix-heavy arithmetic of neural networks.
NPUs handle AI tasks while consuming minimal power, so complex models can run without noticeably slowing the PC, draining the battery, or generating excess heat. And because AI work is offloaded to the NPU, the CPU and GPU stay free for other tasks, improving overall system efficiency.
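As a hedged sketch of how an application might route a model to a dedicated accelerator, the example below uses ONNX Runtime execution providers, listing a Qualcomm NPU provider first and falling back to the CPU. The model path and provider options are placeholders, not Microsoft’s actual Copilot+ runtime configuration, and the QNN provider only works with an onnxruntime build that includes it.

```python
# Sketch: prefer the NPU (via the QNN execution provider) and fall back to the CPU.
import onnxruntime as ort

providers = [
    ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # Qualcomm NPU, if available
    "CPUExecutionProvider",                                    # safe fallback
]

# "distilled_model.onnx" is a placeholder path for an exported distilled model.
session = ort.InferenceSession("distilled_model.onnx", providers=providers)
print("Active providers:", session.get_providers())
```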
Availability
The DeepSeek R1 distilled models will initially be available on Copilot+ PCs powered by Qualcomm Snapdragon X, with support planned to extend to Intel Core Ultra 200V and AMD Ryzen processors in the future.