Cisco Introduces Open-Source AI Model for Cybersecurity
Cisco revealed its latest advancements in cybersecurity at the 2025 RSA Conference, including the release of an open-source generative artificial intelligence (AI) reasoning model designed to automate cybersecurity analytics and workflows. This move aims to bridge the gap between large language models (LLMs) and specific cybersecurity use cases.
Key Features of Cisco’s AI Model
The open-source reasoning model is based on Meta's Llama model family and was pre-trained on 5 billion tokens distilled from a corpus of more than 20 billion tokens. It has 8 billion parameters and is designed to work alongside whatever LLM a cybersecurity team has already chosen. This approach addresses the limitations of existing closed-source LLMs, which are not purpose-built for cybersecurity, are often difficult to customize, and can be expensive to operate.
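To illustrate the LLM-agnostic design described above, here is a minimal sketch of how a team might wrap a security-triage prompt around any model backend. Everything here is hypothetical: the prompt wording, the alert fields, and the `dummy_llm` placeholder are illustrative assumptions, not part of Cisco's release; in practice the callable would invoke whichever locally hosted or API-served model the team has deployed.

```python
from typing import Callable

def build_triage_prompt(alert: dict) -> str:
    """Format a security alert into a prompt for a reasoning model."""
    return (
        "You are a security analyst. Classify this alert and suggest next steps.\n"
        f"Source: {alert['source']}\n"
        f"Signature: {alert['signature']}\n"
        f"Severity: {alert['severity']}\n"
    )

def triage_alert(alert: dict, llm: Callable[[str], str]) -> str:
    """Run triage through whichever LLM the team has chosen to deploy."""
    return llm(build_triage_prompt(alert))

# Placeholder standing in for a real model call (e.g. a locally hosted
# 8B-parameter Llama derivative served behind an HTTP API).
def dummy_llm(prompt: str) -> str:
    return "classification: suspicious; next step: isolate host"

alert = {"source": "ids", "signature": "ET SCAN Nmap probe", "severity": "medium"}
print(triage_alert(alert, dummy_llm))
```

Keeping the model behind a plain callable is what makes the workflow portable: swapping the open-source reasoning model for a commercial LLM changes one function, not the pipeline.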
Integration and Collaboration
Cisco is also enhancing its cybersecurity capabilities through several integrations and collaborations:
- Splunk Integration: Further integrating the Splunk platform with the Cisco Extended Detection and Response (XDR) platform to apply generative AI to cyberattack forensics.
- Threat Intelligence Sharing: Simplifying the sharing of threat intelligence between Splunk Enterprise Security (ES) and Cisco Security Orchestration, Automation and Response (SOAR) platforms.
- Endpoint Investigation: Adding generative AI tools to investigate endpoint issues and introducing visualization tools.
- IoT Security: Enhancing integration between Cisco Cyber Vision and the Cisco Industrial Threat Defense platform for IoT environments.
- ServiceNow Alliance: Forming a formal alliance with ServiceNow to integrate Cisco’s AI Defense platform with ServiceNow’s security operations (SecOps) capabilities.
Implications for Cybersecurity
The introduction of this open-source AI model, alongside Cisco's other initiatives, points to a rapid decline in the cost of applying generative AI to cybersecurity. As cybersecurity vendors adopt open-source models such as this one, the cost of reasoning models trends toward zero, and the number of tokens needed to train them continues to shrink. The remaining challenge is customizing these AI models to meet the unique requirements of specific IT environments.
