Sonatype Brings Supply Chain Security to Open Source AI
Sonatype is extending its expertise in securing software supply chains to the rapidly evolving world of artificial intelligence and machine learning models. The company announced new capabilities designed to help organizations and Managed Security Service Providers (MSSPs) manage and secure AI models in much the same way they currently handle open source software.
Sonatype’s new AI Software Composition Analysis (SCA) solution offers a range of features, including proactive AI threat detection to block malicious AI models from development environments. It also provides governance for storing and managing models within DevOps workflows, automated AI policy management, and AI observability and compliance features.

An AI assistant dashboard.
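To illustrate what this kind of policy enforcement might look like in practice, here is a minimal, hypothetical sketch in Python. It is not Sonatype's API; the ModelDependency structure, the allow/deny lists, and the model names are all assumptions made for the example. The idea is simply that each AI model declared as a project dependency is checked against an organization's block list, approved licenses, and artifact-pinning rules before it can enter the build.

# Hypothetical illustration only -- not Sonatype's actual API.
from dataclasses import dataclass

@dataclass
class ModelDependency:
    model_id: str   # registry identifier such as "org/model-name" (illustrative)
    license: str    # SPDX-style license identifier
    sha256: str     # pinned artifact digest

# Example policy: licenses the organization accepts and models explicitly blocked.
ALLOWED_LICENSES = {"apache-2.0", "mit"}
BLOCKED_MODELS = {"some-org/known-malicious-model"}  # illustrative entry

def evaluate(dep: ModelDependency) -> list[str]:
    """Return the policy violations found for one model dependency."""
    violations = []
    if dep.model_id in BLOCKED_MODELS:
        violations.append(f"{dep.model_id}: model is on the block list")
    if dep.license.lower() not in ALLOWED_LICENSES:
        violations.append(f"{dep.model_id}: license '{dep.license}' is not approved")
    if not dep.sha256:
        violations.append(f"{dep.model_id}: artifact digest is not pinned")
    return violations

if __name__ == "__main__":
    # The models a project declares, analogous to a dependency manifest.
    manifest = [
        ModelDependency("example-org/summarizer-7b", "apache-2.0", "abc123"),
        ModelDependency("some-org/known-malicious-model", "mit", "def456"),
    ]
    problems = [v for dep in manifest for v in evaluate(dep)]
    for p in problems:
        print("POLICY VIOLATION:", p)
    raise SystemExit(1 if problems else 0)  # non-zero exit fails the CI gate

In a real pipeline, a check along these lines would run as a CI gate alongside traditional dependency scanning, so that a flagged model is blocked before it reaches development or production environments.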
The launch of AI SCA comes at a time of significant growth in enterprise adoption of AI, agentic AI, and open source AI. According to Mitchell Johnson, Sonatype’s Chief Product Development Officer, over 300,000 open source AI and machine learning models appeared in customer supply chains in the past year.
“The promise is clear: Open source AI enables faster innovation and reduces barriers to advanced capabilities,” Johnson said. “However, without the right controls, it also creates hidden costs. Teams frequently adopt redundant or conflicting AI models, leading to inefficiencies, higher cloud costs, and integration headaches.”
AI: Open Source Software, But More Complex
The security and governance risks associated with open source AI mirror those found in traditional open source software, but the complexities unique to AI add a further layer of challenges. Johnson emphasized that organizations that fail to address these issues proactively will likely face escalating costs and unmanageable technical debt.
The rise of open source AI parallels the broader wave of AI adoption. Open source AI offers advantages similar to those of open source software, including lower costs, greater opportunities for collaboration, faster innovation cycles, and increased transparency and accountability. It also raises similar concerns, from security and compatibility to uneven quality and the potential for misuse.
Various organizations, such as the Open Source Initiative (OSI), are working to bring structure to the open source AI landscape. In October 2024, the OSI published its initial Open Source AI Definition, which addresses how training data is handled in open source AI and sets clear requirements: builders of open AI systems must share detailed information about the training data, the model’s parameters, and the source code used to train and run the system.
Adoption Gaining Momentum Throughout Industries
Global consultancy McKinsey & Co., in collaboration with the Mozilla Foundation and the Patrick J. McGovern Foundation, surveyed technology leaders and senior developers. More than half of the respondents reported using open source AI technologies somewhere in their AI stacks, often alongside proprietary tools from companies such as OpenAI, Google, and Anthropic. Organizations that place a high priority on AI are also more inclined to embrace open source technologies.
McKinsey observed that “interest in open source AI is growing as the performance of more open foundation models closes the gap to proprietary AI platforms,” pointing to open models like Meta’s Llama and Google’s Gemma, along with newer models such as DeepSeek-R1 and Alibaba’s Qwen 2.5-Max.
MSSPs: A Critical Component in AI Security
Brian Fox, co-founder and CTO at Sonatype, stated, “It has never been easier for organizations to integrate open source AI models into software, but with open source AI consumption comes the same risk facing users of traditional open source. It is imperative that we, as an industry, secure their use now in order to prevent unmanageable security workloads in the future.”
Sonatype’s Johnson suggests that in this quickly changing landscape, MSSPs, whose market is already expanding in response to rising cyberthreats, will play a similarly important role in securing AI adoption. With the introduction of Sonatype’s new AI SCA, MSSPs can now provide a best-in-class solution that not only mitigates security risks but also simplifies AI model selection, reducing redundancy and keeping costs under control.
Sonatype’s partner program includes MSSPs, along with DevOps and security providers. “Their customers gain real-time visibility into AI usage, automated policy enforcement, and proactive threat detection, helping them maintain security while keeping AI adoption efficient and cost-effective,” Johnson added.