TensorWave, a cloud platform built on AMD’s Instinct accelerators for AI workloads, has launched the ‘Beyond CUDA Summit.’ The event, which began today, centers on the concept of the ‘CUDA moat’ and how developers can run their AI workloads on alternative platforms.
Attendees can expect demonstrations, panel discussions, and insights from influential leaders in the AI sector, with featured speakers including prominent computer architects Jim Keller and Raja Koduri.
It’s widely understood that Nvidia-built GPUs constitute the majority of hardware in the AI space. While AMD’s Instinct accelerators demonstrate performance comparable to Nvidia hardware, the established and mature CUDA ecosystem is indispensable to some users and organizations. Nvidia recognized the potential of parallel computing on its GPUs early on, developing CUDA, a proprietary platform that has become the de facto standard for GPU-accelerated computing.
Through consistent optimization work and the explosive growth of AI, which happens to run overwhelmingly on GPUs, Nvidia has established itself as the industry’s leading solution provider. A significant portion of Nvidia’s revenue now comes from its data-center offerings, with CUDA serving as a key selling point. The result is vendor lock-in: CUDA, the software, effectively ties the industry to Nvidia’s hardware, limiting innovation and competition.
The industry is moving toward a more open-source and hardware-agnostic future, but this shift is easier said than done. Alternatives such as OpenCL, ROCm, oneAPI, and Vulkan exist, but each trails CUDA in one or more respects. This is where the Beyond CUDA Summit comes in, bringing together key figures in the AI field to collectively chart a more diverse and heterogeneous future. The summit will address the many challenges the AI computing industry faces, including hardware flexibility, cost efficiency, and the viability of the available alternatives to CUDA.
Platforms like ROCm need significant advancements to achieve parity with CUDA. Currently, ROCm officially supports only a limited selection of modern GPUs, while CUDA maintains compatibility with hardware dating back to 2006. AMD’s most recent RDNA 4 GPUs are still not officially supported by ROCm, and developers have long voiced concerns about AMD’s slow rollout of new features and support for new hardware. On a positive note, Strix Halo is now ROCm-compatible, although only on Windows.
The summit is taking place at The Guildhouse in San Jose. Ironically, it is just three blocks from the McEnery Convention Center, where Nvidia’s GTC is also commencing today. Participants have the opportunity to win an AMD Instinct MI210 GPU with 64GB of HBM2e memory. The event runs from 12 PM to 10 PM PDT, with four time slots dedicated to various sessions.