Behind the scenes of generative AI breakthroughs and GPU hype, a quieter transformation is taking place in data center architecture. As AI models grow larger and demand more compute, the choice of network fabric has become a crucial determinant of AI performance, scalability, and cost. Broadcom, long dominant in networking and semiconductors, is emerging as a key player in AI’s infrastructure revolution.
“There’s a shift happening in the market,” says Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group. “AI is not just about GPUs or compute anymore. It’s about how data moves, power is managed, and how systems scale.”
Broadcom’s Strategic Evolution
Broadcom began as a semiconductor company in 1991, focusing on wireless and broadband communication. A major turning point came in 2015 when Avago Technologies acquired Broadcom for $37 billion, transforming it into a global semiconductor and infrastructure technology leader. Through strategic acquisitions like ServerWorks in 2001 and VMware in 2023, Broadcom expanded its reach in the data center space.
Ethernet: The Backbone of AI Infrastructure
The company’s reputation is built on high-speed Ethernet switch chips like the Tomahawk series, which provide the high-bandwidth networking at the heart of modern data centers. Rather than chasing headlines with flashy AI demonstrations, Broadcom is focused on building the infrastructure that lets AI developers scale the technology. Velaga and his team are helping tech giants and hyperscalers rethink their data center architecture through deeply integrated systems: codesigned chips, custom interconnects, and a commitment to Ethernet.
Rethinking Data Center Architecture
Broadcom’s approach centers on integrated systems that optimize how data moves, how power is managed, and how systems scale. That strategy matters more each year, as ever-larger and more complex AI models demand increasingly sophisticated data center infrastructure. By betting on Ethernet, Broadcom is positioning itself at the heart of that buildout, underpinning more efficient and scalable AI systems.