HIMSS Survey Highlights AI Cybersecurity Vulnerabilities
Recent findings from a Healthcare Information and Management Systems Society (HIMSS) survey highlight significant cybersecurity risks associated with the increasing use of artificial intelligence (AI) in healthcare, particularly concerning insider threats. The survey, which included responses from 273 healthcare cybersecurity professionals, reveals a mixed landscape of AI adoption and security preparedness.

AI Usage and Organizational Policies
The survey found that nearly one-third of healthcare organizations allow their employees to use AI without formal restrictions, while only 16% prohibit it outright. “The findings underscore the need for robust safeguards, ethical frameworks and proactive measures to address the risks,” the report noted. This widespread acceptance of AI, often without sufficient oversight, creates potential vulnerabilities.
Key findings related to AI usage include:
- AI Use Cases: 37% of respondents reported using AI for technical tasks like support and data analytics, 35% for clinical services such as diagnostics, and 34% for both cybersecurity and administrative tasks.
- AI Guardrails: While nearly half of the organizations surveyed have approval processes for AI technologies, 42% do not, and 11% are unsure. HIMSS commented that “An approval process serves as a proactive guardrail by vetting AI technologies before adoption, reducing the likelihood of unauthorized or inappropriate use.”
- Active Monitoring: Only 31% actively monitor AI usage across systems and devices, with 52% not monitoring and 17% unsure. HIMSS pointed out that the lack of monitoring poses risks such as data breaches, and called for robust monitoring strategies to ensure the safe and responsible use of AI technologies.
- Acceptable Use Policies (AUPs): 42% of organizations have written AUPs for AI, while 48% do not, and 10% are unaware. HIMSS noted that “An acceptable use policy sets clear guidelines for the safe and responsible use of technology, including AI, and can be standalone or integrated into a general policy based on the organization’s AI adoption.”
Cybersecurity Concerns
Survey participants expressed significant concerns about the cybersecurity implications of AI:
- Data Privacy: 75% of respondents identified data privacy as their top concern.
- Data Breaches and Bias: Data breaches and bias in AI systems were cited as top concerns by 53% of respondents.
- Intellectual Property Theft and Transparency: Nearly half of respondents expressed concerns about intellectual property theft (47%) and lack of transparency (47%).
- Patient Safety: Patient safety risks were highlighted by 41% of respondents.
Insider Threats and AI
The survey also addressed the threat of insider activity, revealing that negligent insider threat activity was reported by 5% of respondents, malicious insider threat activity by 3%, and a combination of both by 3%. Despite these seemingly low numbers, HIMSS noted that many organizations may not have specific monitoring in place for AI-driven insider threats, leaving potential risks undetected. “The growing reliance on AI tools and systems introduces new opportunities for both negligent and malicious insider activity, which can amplify risks to sensitive data and operational integrity,” the authors noted.
The survey did not specify the geographic regions included, but HIMSS operates in North America, Europe, the U.K., the Middle East, and the Asia-Pacific region.