Enterprises embarking on large-scale artificial intelligence (AI) initiatives are increasingly stepping into the world of supercomputing, whether they label it as such or not, according to Trish Damkroger, senior vice-president and general manager of high-performance computing (HPC) and AI infrastructure solutions at Hewlett Packard Enterprise (HPE).
Speaking to Computer Weekly, Damkroger noted that the fundamental principles underpinning modern AI infrastructure – massive compute power, high-density configurations, and scale-up architectures – directly parallel traditional supercomputing. “It’s all aligned with supercomputing, whether you want to call it supercomputing or not,” she said, emphasising that it’s about “dense computing and scale-up architecture.”

Damkroger pointed to burgeoning power demands as a clear indicator of this trend, citing discussions with customers about building one-gigawatt datacentres, a scale that is becoming increasingly common. Some sectors are also leveraging HPC to run AI applications, such as a quant trading fund looking to use supercomputers for high-density AI workloads requiring direct liquid cooling.
Real-World Applications of Supercomputing in AI
South Korea’s SK Telecom is using supercomputing to train large Korean-language models based on OpenAI’s GPT-3, powering AI services across its mobile network. HPE provided an integrated, high-performance architecture to support large-scale training and deployment. In Japan, Toyo Tires adopted HPE GreenLake with HPE’s Cray XD systems to accelerate simulations for tyre design, achieving up to three times faster performance and halving simulation times.
Growing Adoption Across Asia-Pacific
The growing adoption of AI has spurred interest in HPC systems across the Asia-Pacific (APAC) region. “Last year, our AI sales in APAC were second only to North America, which is not normally the case,” Damkroger noted. “There’s a lot of growth in the AI space in the region.”
To cater to diverse enterprise needs, HPE offers a flexible software strategy. This includes AI factories that let customers select open-source frameworks on top of HPE’s cluster management software, orchestrated by the Morpheus hybrid cloud management platform. For those seeking more turnkey capabilities, HPE’s Private Cloud AI is a curated offering that allows AI and IT teams to experiment with and scale AI projects.
Challenges and Future Directions
Despite advancements and growing adoption, finding truly transformative enterprise AI applications that leverage HPC remains an ongoing quest. “If you look at AI specifically in enterprises, there are some good use cases, but I don’t think we’ve found the most amazing ones yet,” Damkroger said. Key challenges include initial infrastructure investment, power requirements, and a persistent talent shortage.
For sustained and intensive HPC and AI workloads, Damkroger argued, on-premise deployments become more economical than public cloud once utilisation exceeds 70%. “We find that in the HPC space, if you’re going to use it a lot with greater than 70% utilisation, it’s much more cost-effective to have it on-premise,” she explained, while acknowledging the public cloud’s role in exploration and lower-demand scenarios.
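The intuition behind such a utilisation threshold can be sketched with a back-of-the-envelope comparison: owned hardware carries a fixed amortised cost regardless of use, while cloud spend scales with consumed hours, so a break-even utilisation emerges where the two curves cross. All figures below are hypothetical assumptions chosen for illustration – they are not from the article or from HPE pricing.

```python
# Illustrative on-premise vs public cloud cost comparison for a GPU cluster.
# Every number here is a hypothetical assumption, used only to show how a
# utilisation break-even point arises; real costs vary widely.

def on_prem_monthly_cost(capex: float, lifetime_months: int,
                         monthly_opex: float) -> float:
    """Amortised monthly cost of owned hardware (paid regardless of use)."""
    return capex / lifetime_months + monthly_opex

def cloud_monthly_cost(hourly_rate: float, utilisation: float,
                       hours_per_month: float = 730) -> float:
    """Cloud cost scales with the hours actually consumed."""
    return hourly_rate * hours_per_month * utilisation

# Hypothetical cluster: $1.5m capex amortised over 48 months, plus
# $20k/month for power and staff, versus an equivalent cloud capacity
# billed at a hypothetical $100/hour.
CAPEX, LIFETIME, OPEX, CLOUD_RATE = 1_500_000, 48, 20_000, 100.0

on_prem = on_prem_monthly_cost(CAPEX, LIFETIME, OPEX)  # fixed: $51,250/month
for u in (0.3, 0.5, 0.7, 0.9):
    cloud = cloud_monthly_cost(CLOUD_RATE, u)
    cheaper = "on-prem" if on_prem < cloud else "cloud"
    print(f"utilisation {u:.0%}: cloud ${cloud:,.0f} "
          f"vs on-prem ${on_prem:,.0f} -> {cheaper}")
```

With these assumed figures the crossover lands just above 70% utilisation: at 90% the cloud bill ($65,700) exceeds the fixed on-premise cost ($51,250), while at lower utilisation the cloud is cheaper – which is the shape of the trade-off Damkroger describes, not a reproduction of her numbers.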
Reflecting on HPE’s deep roots in HPC, including the Cray heritage, Damkroger noted: “It’s fun to be in this business right now with liquid cooling being so prominent. We’re finally seeing the results of all the work we’ve been doing for the last 50 years.”