The UK’s National Cyber Security Centre (NCSC) is warning organizations about the dangers of deploying AI systems without proper security controls. At the recent CYBERUK 2025 conference, industry experts revealed that while many organizations are rushing to adopt AI technology, few have a firm grasp on the security risks involved.
Peter Garraghan, CEO of Mindgard and professor of distributed systems at Lancaster University, conducted an informal poll during a conference session. When he asked how many attendees had banned generative AI in their organizations, only three hands went up among the 200-strong crowd of security professionals. More alarmingly, when he asked how many had a good understanding of the security risks associated with AI system controls, not a single hand was raised.
“So everyone’s using generative AI, but no one has a grasp of how secure it is in the system,” Garraghan observed. “The cat’s out of the bag.” This haphazard approach to AI deployment is precisely what the NCSC is trying to prevent, as it significantly increases the attack surface, particularly for organizations in critical supply chains.
The NCSC released a report during CYBERUK 2025, launched by senior minister Pat McFadden, warning that organizations failing to integrate AI into their cyber defenses risk becoming vulnerable to a new generation of cybercriminals. The report noted that AI-empowered attackers are likely to further reduce the time-to-exploitation of vulnerabilities, which has already decreased to just days in recent years.
“Organizations and systems that do not keep pace with AI-enabled threats risk becoming points of further fragility within supply chains,” an NCSC spokesperson warned. “This will intensify the overall threat to the UK’s digital infrastructure and supply chains across the economy.”
The NCSC’s concerns are backed by recent pentests conducted by Garraghan’s company. During one test involving a candle shop’s AI chatbot, Mindgard was able to identify significant security vulnerabilities due to insecure deployment. Potential risks included prompt engineering leading to a reverse shell on the application, extraction of system data, and the possibility of engineering the chatbot to provide dangerous instructions or divulge sensitive business information.
The NCSC report highlighted several key risks associated with insecure AI deployments, including insecure data handling processes and configurations that could result in data interception, credential theft, or targeted attacks on user data. To mitigate these risks, the NCSC is emphasizing the importance of applying cybersecurity fundamentals when deploying AI systems.
As AI models become increasingly entrenched in organizations’ systems, data, and operational technology, the potential attack surface expands. Common attacks associated with AI, such as direct and indirect prompt injections, software vulnerabilities, and supply chain attacks, can facilitate wider access to enterprise environments if not properly controlled.
The NCSC plans to publish guidance and advice throughout the year to help UK organizations improve their cyber resilience against AI-assisted cyberattacks. In the meantime, the agency is urging organizations to establish a strong baseline of cybersecurity defenses and for large technology companies to take responsibility for improving the resilience of their supply chains.