ECRI, a patient safety organization, has released its top 10 health technology hazards for 2025, placing artificial intelligence-enabled applications at the forefront of its concerns. The organization's report highlights several key risks associated with the increasing use of AI in healthcare.
ECRI’s primary concern revolves around the potential for biases within AI training data. These biases, they warn, “can lead to disparate health outcomes or inappropriate responses.” Furthermore, the report points out that AI systems can sometimes produce false or misleading results, which could lead to dangerous patient care decisions if clinicians place excessive trust in the technology.
In its report, ECRI states that it is crucial to maintain human oversight when implementing AI in medical settings. The report cautions that placing too much faith in models without adequately scrutinizing their outputs "can yield disappointing results if organizations have unrealistic expectations, fail to define goals, provide insufficient governance and oversight or don't adequately prepare their data."
While acknowledging the potential for AI to enhance the efficiency and precision of diagnoses and treatments, ECRI emphasizes that "human decision-making remains at the core of the care process." The organization stresses the importance of careful consideration when integrating any AI solution into healthcare operations or clinical practice, with a particular focus on mitigating the risk of inappropriate care decisions.
The report also identifies other significant health technology hazards, including the unmet technology support needs of home care patients. This concern has been a recurring theme in ECRI's recent reports, which cite "numerous examples of patient harm from improper setup of or lack of familiarity with medical devices used in the home setting." The organization stresses the necessity of comprehensive patient support for the operation, maintenance, and troubleshooting of at-home medical devices.
Cybersecurity threats and vulnerable technology vendors rank as the third-biggest hazard on the list. ECRI recommends that regulatory agencies "move away from 'punish but not protect' approaches to cybersecurity challenges and third-party risks and toward fostering a collective approach to cybercrime and vendor risk."
Rounding out the top five hazards are substandard or fraudulent medical devices and fire risks associated with supplemental oxygen use. The remaining items on ECRI's list include dangerously low default alarm limits on anesthesia units, medication order mismanagement, infusion line issues, skin injuries from medical adhesive products, and incomplete investigations of infusion incidents.