Microsoft Unveils New AI Assistant ‘Dragon Copilot’: How Safe are AI Tools in Healthcare?
Microsoft has launched Dragon Copilot, a new AI assistant designed to reduce the workload of doctors and other healthcare professionals. But is it a technological breakthrough, or does it come with significant risks?
Generative AI tools in healthcare, like Dragon Copilot, are designed to help with administrative tasks, improve efficiency, and counter clinician burnout. The company is capitalizing on the need for such tools and has emerged as a major player in the AI note-taking market.
“No one becomes a clinician to do paperwork, but it’s becoming a bigger and bigger administrative burden, taking time and attention away from actually treating and supporting patients. That’s why we’re introducing Microsoft Dragon Copilot, the industry’s first AI assistant for clinical workflow,” Microsoft CEO Satya Nadella said in a post on X.
What is Dragon Copilot?
Dragon Copilot is built upon existing tools like Dragon Medical One (DMO) and DAX, developed by Nuance Communications, which Microsoft acquired for $16 billion in 2021. According to Microsoft, DMO’s speech capabilities have helped clinicians transcribe “billions of patient records,” while DAX’s ambient AI technology has “assisted over 3 million ambient patient conversations across 600 healthcare organizations in the past month alone.”
The tool aims to improve efficiency within healthcare settings. Joe Petro, the corporate vice president of Microsoft Health and Life Sciences Solutions and Platforms, stated, “With the launch of our new Dragon Copilot, we are introducing the first unified voice AI experience to the market, drawing on our trusted, decades-long expertise that has consistently enhanced provider wellness and improved clinical and financial outcomes for provider organizations and the patients they serve.”
How to Use Dragon Copilot
According to Microsoft, Dragon Copilot can draft memos and notes in a personalized style and format. Beyond voice-to-text transcription, its interface supports prompts and templates for generating notes. The AI assistant can also search trusted sources for medical information and automate tasks such as order placement, note summarization, referral letters, and after-visit summaries, all within a centralized workspace.
Microsoft has also released survey findings claiming the AI assistant saved clinicians around five minutes per patient interaction. According to the company, around 70% of clinicians reported reduced feelings of burnout and fatigue, 62% said they are less likely to leave their organization, and 93% of patients reported a better overall experience.
Dragon Copilot will be accessible via mobile app, browser, or desktop, with direct integration with electronic health records. Dragon Copilot will become available in the US and Canada in May this year and will subsequently launch in the UK, Germany, France, the Netherlands, and other key markets. The cost of accessing Dragon Copilot has not been announced.

Rise of AI Tools in Healthcare
Tech companies and startups are aggressively introducing AI tools and hardware for the healthcare sector. Google Cloud, for example, is being used by healthcare providers to create AI agents that support medical assistants in preparing patient charts and performing administrative tasks while flagging potential health risks. Startups Abridge and Suki have raised significant funding to create similar AI scribing tools.
However, AI tools in healthcare are not without risks. Large language models (LLMs) can hallucinate or produce inaccurate information. This is particularly concerning in medical settings, where patient outcomes could be affected.
The US Food and Drug Administration (FDA) has identified model hallucinations as a challenge, noting that in products designed to summarize doctor-patient interactions, hallucinated or fabricated AI-generated content could be critically dangerous. The FDA has also pointed out that foundational AI models “may be susceptible to bias that may be especially difficult for individual product developers to identify or mitigate for their resulting GenAI-enabled products.”
Microsoft asserts that healthcare-specific safeguards have been incorporated into Dragon Copilot. The company says the tool was developed in line with Microsoft’s responsible AI principles, but it has not provided specific details on how the AI assistant addresses the risks of hallucination and performance bias. Until such details emerge, the full risks of AI tools in healthcare remain unknown, and there is genuine cause for caution.