How to Bring AI Into Cybersecurity Responsibly
As artificial intelligence (AI) becomes increasingly integrated into cybersecurity tools, Managed Security Service Providers (MSSPs) face both significant opportunities and real risks. Adopting AI in the Security Operations Center (SOC) can transform security practices, but it requires a careful, responsible approach.

In a recent episode of the “Let’s SOC About It” podcast, hosted by D3 Security, Anthony Green, a member of the AI Ethics Advisory Panel for the Digital Governance Council and former president of the ISACA Vancouver chapter, discussed key considerations for MSSPs in the AI era. The conversation highlighted several critical points for implementing AI security tools.
The Evolution of AI in Cybersecurity
AI has been a component of cybersecurity for over a decade and continues to evolve. Green emphasized that organizations must treat AI implementation with the same level of scrutiny applied to any critical security infrastructure. That rigor starts with comprehensive vendor evaluation, examining everything from where data is processed to the methodologies used to train the AI systems.
Risk Assessment and Cloud Security Principles
Green suggested applying established cloud security principles to AI implementations, with added considerations for ethics and bias. Organizations should scrutinize an AI vendor as they would any cloud service provider, examining encryption standards, access controls, and vulnerability management practices.
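To make the idea concrete, here is a minimal sketch (not from the podcast) of how an MSSP might encode such a due-diligence checklist. The categories, questions, and field names are illustrative assumptions, not the Digital Governance Council’s framework or any specific vendor’s requirements.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """A single due-diligence question for an AI or cloud vendor (illustrative only)."""
    category: str   # e.g. "Data residency", "Encryption", "Ethics & bias"
    question: str
    satisfied: bool = False
    notes: str = ""

def build_ai_vendor_checklist() -> list[ChecklistItem]:
    """Hypothetical checklist mixing cloud security principles with AI-specific items."""
    return [
        ChecklistItem("Data residency", "Where is customer data processed and stored?"),
        ChecklistItem("Encryption", "Is data encrypted in transit and at rest to current standards?"),
        ChecklistItem("Access control", "Are role-based access controls and MFA enforced?"),
        ChecklistItem("Vulnerability management", "Does the vendor patch and pen-test on a defined cadence?"),
        ChecklistItem("Training data", "What data and methodology were used to train the model?"),
        ChecklistItem("Ethics & bias", "Has the vendor assessed the model for biased or unfair outcomes?"),
    ]

def unresolved_items(checklist: list[ChecklistItem]) -> list[ChecklistItem]:
    """Return the items that still block approval of the vendor."""
    return [item for item in checklist if not item.satisfied]

if __name__ == "__main__":
    checklist = build_ai_vendor_checklist()
    for item in unresolved_items(checklist):
        print(f"[OPEN] {item.category}: {item.question}")
```

Tracking each item with a status and notes also makes it easier to show auditors why a particular vendor was approved or rejected.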
AI Bias and Ethical Considerations
A key focus of the discussion was AI bias, illustrated with real-world examples of problematic deployments at major tech companies. Green introduced the Digital Governance Council’s 40-question framework, informed by the EU AI Act and the OWASP Top 10 guidelines, to help organizations assess whether AI systems are being implemented ethically and with bias risks addressed.
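The 40 questions themselves are not reproduced in the episode, but as an illustration of one common way bias can be surfaced in practice, the sketch below compares a model’s positive-decision rates across groups (a disparate-impact style check). The record format and group labels are assumptions for illustration and are not part of the Council’s framework.

```python
from collections import defaultdict

def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """Share of positive model decisions per group.

    Each record is assumed to look like {"group": "A", "approved": True};
    the field names are illustrative, not taken from any published framework.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        if record["approved"]:
            positives[record["group"]] += 1
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 means parity).

    Assumes at least one group has a non-zero selection rate.
    """
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True}, {"group": "A", "approved": False},
        {"group": "B", "approved": True}, {"group": "B", "approved": True},
    ]
    rates = selection_rates(sample)
    print(rates, disparate_impact_ratio(rates))
```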
Organizational AI Ethics Structure
The episode also broke down how companies should structure AI governance, drawing a comparison to application security models, and emphasized the need for collaboration among security teams, privacy teams, and end users.
Data Protection Strategies
The podcast highlighted Microsoft’s strategies for AI implementation as an example of enforcing proper data access controls and preventing unauthorized information exposure. The discussion also emphasized that when AI systems make automated decisions affecting security operations, accuracy and reliability are paramount.
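Those strategies are not detailed in the episode, so the following is only a generic sketch of the underlying principle: an AI assistant should see only the documents the requesting user is already authorized to read. The permission model and field names here are hypothetical and are not Microsoft’s implementation.

```python
def authorized_context(user_permissions: set[str], documents: list[dict]) -> list[dict]:
    """Filter retrieved documents down to those the requesting user may read.

    The permission model (a set of ACL labels per user and per document)
    is purely illustrative.
    """
    return [doc for doc in documents if doc["acl_label"] in user_permissions]

def build_prompt(question: str, documents: list[dict]) -> str:
    """Assemble an AI prompt from the question plus only pre-authorized context."""
    context = "\n".join(doc["text"] for doc in documents)
    return f"Context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    user_permissions = {"finance", "public"}
    retrieved = [
        {"acl_label": "public", "text": "Quarterly phishing statistics."},
        {"acl_label": "hr-confidential", "text": "Employee disciplinary records."},
    ]
    allowed = authorized_context(user_permissions, retrieved)
    print(build_prompt("Summarize recent phishing trends.", allowed))
```

Filtering context before it ever reaches the model keeps access decisions with the existing permission system rather than relying on the AI to withhold information it has already seen.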
By implementing AI responsibly, MSSPs and other organizations can harness its power while mitigating its risks. Doing so requires careful planning, thorough vendor evaluation, and an ongoing commitment to ethical considerations.