AI Agents in Crypto: Emerging Security Concerns
The integration of AI agents into cryptocurrency wallets, trading bots, and on-chain assistants is becoming increasingly prevalent, automating tasks and enabling real-time decision-making. At the heart of many of these AI agents is the Model Context Protocol (MCP), an emerging framework that, while not yet standard, is gaining traction. MCP acts as a control layer managing an AI agent’s behavior, determining which tools to use, code to run, and how to respond to user inputs.
The flexibility offered by MCP also presents a substantial attack surface, potentially allowing malicious plugins to override commands, poison data inputs, or trick agents into executing harmful instructions. According to VanEck, the number of AI agents in the crypto industry surpassed 10,000 by the end of 2024 and is projected to exceed 1 million in 2025.
MCP Attack Vectors Expose AI Agents’ Security Issues
Security firm SlowMist has identified four potential attack vectors that developers need to be aware of, all of which are delivered through plugins that extend the capabilities of MCP-based agents. These include:
- Data poisoning: Manipulates user behavior and creates false dependencies.
- JSON injection attack: Can lead to data leakage or command manipulation.
- Competitive function override: Overrides legitimate system functions with malicious code.
- Cross-MCP call attack: Induces AI agents to interact with unverified external services.
These attack vectors target AI agents built on top of models, acting on real-time inputs using plugins and control protocols like MCP, rather than poisoning the AI models themselves.
Securing the AI Layer
Experts stress that building security into AI systems from the outset is crucial, particularly in the high-stakes crypto environment. “When you build any plugin-based system today, especially in crypto, you have to build security first and everything else second,” said Lisa Loud, executive director of Secret Foundation.
To mitigate these risks, developers are recommended to implement strict plugin verification, enforce input sanitization, apply least privilege principles, and regularly review agent behavior. As AI agents continue to expand their presence in crypto infrastructure, proactive security measures are essential to prevent potential exploits and protect crypto wallets, funds, and data.