The explosive growth of AI applications is transforming data center, edge, and cloud networks. As companies deploy more AI solutions, demand for bandwidth, low latency, and architectural flexibility is growing at a pace that traditional networks were not designed to handle.
Recent data from Omdia reveals that AI traffic, including both new AI applications and AI-enhanced applications, accounted for 39 exabytes of total network traffic in 2024. Non-AI traffic from AI-enhanced applications totaled 131 exabytes, while conventional application traffic reached 308 exabytes, according to Brian Washburn, Omdia research director. Omdia predicts that AI traffic will double to 79 exabytes in 2025 and continue growing at a rate far surpassing conventional traffic. By 2031, AI traffic is expected to overtake conventional traffic.
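As a rough illustration of that trajectory, the short Python sketch below projects both traffic categories forward from Omdia's 2024 baselines. The 2025 figure comes from the forecast above; the post-2025 growth multipliers are assumptions chosen so the crossover lands around 2031, not figures from Omdia.

```python
# Illustrative projection of the Omdia figures cited above. The 2024
# baselines and the 2025 jump to 79 EB are from the article; the growth
# multipliers after 2025 are assumptions, not Omdia data.

AI_2024, CONVENTIONAL_2024 = 39, 308   # exabytes, per Omdia
AI_GROWTH, CONV_GROWTH = 1.40, 1.09    # assumed annual multipliers after 2025

ai, conventional = AI_2024, CONVENTIONAL_2024
for year in range(2025, 2033):
    ai = 79 if year == 2025 else ai * AI_GROWTH  # Omdia: ~doubles to 79 EB
    conventional *= CONV_GROWTH
    marker = "  <- AI overtakes conventional" if ai > conventional else ""
    print(f"{year}: AI {ai:6.0f} EB   conventional {conventional:6.0f} EB{marker}")
```

Under these assumed rates, AI traffic passes conventional traffic in 2031, consistent with Omdia's expectation; different assumptions would shift the crossover year.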
AI Networking Challenges
AI poses significant networking challenges, particularly in data centers where model training generates substantial traffic between GPUs and servers. “The demand for massive resources, especially CPU and GPU, is driving a new zone within enterprise data centers dedicated to AI,” says Lori MacVittie, distinguished engineer at F5 Networks. These AI-focused environments require smarter networking, enhanced security capabilities, and the ability to handle higher data volumes.
Data Center Implications
The growth of AI is driving a renaissance in the data center Ethernet switch market. IDC forecasts that the generative AI data center Ethernet switch market will grow from $640 million in 2023 to over $9 billion in 2028. Enterprises are also experimenting with agentic AI, where AI-powered agents collaborate on complex tasks. This type of AI often operates on-premises or in private clouds to reduce costs and latency while maintaining data security.
Cloud and Edge Considerations
Once AI models are deployed, traffic flows between models and end users, requiring strong wide-area and multi-site connectivity. Jason Carolan, chief innovation officer at Flexential, notes that inference requires a different network topology than training, which depends on dense local networks. The flexibility to adapt to changing AI workloads is crucial, as network connection requirements may shift with new models, data, or endpoints.
Edge AI presents its own set of challenges, particularly regarding latency. Applications like self-driving cars, factory robots, and medical devices require processing capabilities close to data sources to reduce latency and bandwidth usage. Salesforce’s Paul Constantinides emphasizes the need for low-latency edge networks to support such applications.
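The physics makes the case concrete. Even before any processing or queuing, propagation delay through fiber sets a floor on round-trip time, and the sketch below shows how quickly distance consumes a tight latency budget. The distances and the control-loop budget are hypothetical, used only to illustrate the argument.

```python
# Back-of-the-envelope propagation delay: why edge placement cuts latency.
# Distances and the latency budget below are hypothetical illustrations.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km/ms in fiber

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time; ignores queuing and processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for label, km in [("on-site edge node", 1),
                  ("metro edge site", 100),
                  ("distant cloud region", 2000)]:
    print(f"{label:>22}: {round_trip_ms(km):6.2f} ms round trip")

# A factory robot with a 10 ms control loop cannot tolerate the ~20 ms
# round trip to a distant region, before any compute time is spent.
```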
Security Concerns
AI introduces new security challenges for enterprises. Rich Campagna, senior vice president at Palo Alto Networks, notes that attackers are developing techniques to exploit AI systems and components. The distributed nature of edge devices and networks creates visibility blind spots, making problem detection harder. Palo Alto is developing AI applications to address these challenges, and many customers are following suit.
Agentic AI Security
F5’s MacVittie highlights the security complexities introduced by agentic AI, particularly around identity, credentials, and privileges in zero-trust environments. Sanjay Kalra, product leader at Zscaler, emphasizes the need for fine-grained security as AI proliferates across internal networks. Zscaler’s data shows that enterprise customers blocked 60% of AI transactions, including 2.9 million attempts to upload sensitive data to ChatGPT alone.
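The kind of fine-grained control Kalra describes, and the blocking Zscaler reports, amounts to content-aware inspection of traffic bound for AI endpoints. The sketch below is a minimal illustration of that idea; the host watchlist, patterns, and policy are assumptions made for the example, not any vendor's actual implementation.

```python
# Minimal sketch of content-aware egress control for generative AI traffic.
# The watchlist, patterns, and policy are illustrative assumptions only.
import re

GENAI_HOSTS = {"chat.openai.com", "api.openai.com"}  # assumed watchlist
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like pattern
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-number-like digit run
    re.compile(r"(?i)internal use only"),     # document classification tag
]

def allow_upload(host: str, body: str) -> bool:
    """Block sensitive-looking content headed to watched AI endpoints."""
    if host not in GENAI_HOSTS:
        return True  # policy here covers only the AI watchlist
    return not any(p.search(body) for p in SENSITIVE_PATTERNS)

print(allow_upload("chat.openai.com", "Summarize: INTERNAL USE ONLY memo"))  # False
print(allow_upload("chat.openai.com", "Explain TCP slow start"))             # True
```

Real products layer far more on top, such as user identity, data classification, and inline ML detectors, but the allow-or-block decision per transaction is the shape of the control.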
The rise of AI-powered attacks is another significant concern. According to Bugcrowd’s hacker survey, 86% of white-hat hackers say AI has changed their approach. Keeper Security reports that 51% of IT and security leaders view AI-powered attacks as their most serious threat. In response, top security vendors are investing heavily in AI-driven security solutions.