Amazon is making a major push into the world of artificial intelligence, unveiling a range of new products and initiatives that signal the company’s aggressive ambitions in this rapidly evolving field.
The company’s announcements, which have already begun to make waves across the tech sector, include new developments in AI computer chips, plans for a powerful new supercomputer, and several generative AI models.
Amazon’s stock saw an uptick as investors digested the news, which positions the company to touch nearly every area of the AI landscape. Amazon has a history of disrupting industries, and it now seems set to bring that same disruptive energy to AI.
One significant move is the launch of the Trainium2 chips, with Trainium3 on the horizon. The chips are designed for the heavy computational demands of AI and are poised to compete with the graphics processing units sold by industry leaders like Nvidia and AMD. Alongside the new chips, Amazon also revealed plans for a supercomputer called Rainier to handle its AI training and computational needs. Rainier is set to rival the likes of Elon Musk’s Cortex and Colossus AI supercomputer complexes.
In terms of software and applications, Amazon has launched six foundational large language models under its Nova umbrella. These models are designed to compete with existing offerings such as OpenAI’s ChatGPT and Google’s Gemini. Notably, the new LLMs will be substantially discounted, with prices as much as 75% lower than comparable models on the market, and will work in over 200 languages. Some of the Nova models will be multimodal, capable of generating text, images, and video from a single AI platform.
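For readers curious what working with the Nova models might look like in practice, here is a minimal sketch of calling one through Amazon Bedrock’s Converse API with boto3. The specific model ID and inference settings are assumptions for illustration and may differ from Amazon’s actual offering.

```python
# Minimal sketch (not an official example): prompting a Nova model through
# Amazon Bedrock's Converse API. Assumes boto3 is installed, AWS credentials
# are configured, and the account has been granted access to a Nova model.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed identifier; check the Bedrock model catalog
    messages=[
        {"role": "user", "content": [{"text": "Summarize Amazon's Nova announcement in two sentences."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.3},
)

print(response["output"]["message"]["content"][0]["text"])
```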
An Amazon spokesperson noted that the company is focused on offering its customers a range of options to meet diverse needs. “We are launching these models now because they are ready and we are excited to share them with customers. We’re building the world’s most useful AI and excited to tell the world about it,” wrote an Amazon spokesperson in an email exchange. “Training foundational models is costly, but we are fortunate to be able to be in a position to train and deploy the Nova models at scale to benefit our customers. Our goal is to provide a diverse range of options, recognizing that different customers have different needs. We’ll continue investing in both our own models and partnerships, ensuring developers have access to a variety of tools for building and deploying AI applications,” the spokesperson added.
To address the limitations of generative AI, Amazon has taken a proactive approach to combating issues like “hallucinations,” where AI models generate incorrect information. “Amazon Science recently launched RefChecker, a new tool to detect hallucinations, and a benchmark dataset for assessing them in various contexts. AWS services like Amazon Kendra can help reduce hallucinations by augmenting LLMs to provide more accurate and verifiable information to the end user. For Amazon Q, we help customers understand how a specific answer was derived by providing citations and links to source material, so customers can make an informed assessment of its outputs,” the spokesperson explained.
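The approach the spokesperson describes, grounding a model’s answer in retrieved, citable source material, follows the general retrieval-augmented pattern. The sketch below is a conceptual illustration of that pattern, not Amazon’s implementation; the function name and prompt wording are hypothetical.

```python
# Conceptual sketch of retrieval-augmented prompting: the general pattern behind
# pairing a search service (such as Amazon Kendra) with an LLM so that answers
# are grounded in citable passages rather than the model's memory alone.
# Illustrative only; it does not reflect Amazon's internal implementation.
from typing import List, Tuple

def build_grounded_prompt(question: str, passages: List[Tuple[str, str]]) -> str:
    """Assemble a prompt that tells the model to answer only from the retrieved
    passages and to cite them, which constrains unsupported (hallucinated) claims."""
    context = "\n".join(f"[{source}] {text}" for source, text in passages)
    return (
        "Answer the question using only the sources below. Cite the source label "
        "for every claim, and reply 'not found in sources' if they do not cover it.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# In practice the passages would come from an enterprise search index;
# here they are hard-coded for illustration.
passages = [
    ("press-release", "Amazon's Nova models work in over 200 languages and include multimodal variants."),
]
print(build_grounded_prompt("How many languages do the Nova models support?", passages))
```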
Experts in the AI field see Amazon’s ambitions as a serious play for market share. Ben Torben-Nielsen, an internationally recognized AI consultant, expressed admiration for the breadth and quality of Amazon’s Nova generative AI models. “So far, AWS has largely been a broker between its cloud infrastructure and other [LLM] providers. Now they are releasing an entire line of text, image, video and multi-modal models. Is this a Netflix moment? Does AWS think the content providers, in their case the LLM providers, have too much power and eat their margin, and therefore they start making ‘their own content’ in the form of foundational models?” he wrote.
Conor Grennan, chief AI architect at NYU Stern School of Business, believes the Trainium2 chips will have the most significant impact. “Amazon’s Trainium2 chips showing 30-40% better price performance than current GPU-based instances represents a serious challenge to Nvidia’s dominance. With both Apple and Anthropic committing to use these chips for AI training, AWS is positioning itself as a viable alternative in the AI infrastructure space. This space can use good competition,” Grennan wrote.
Ahmed Banafa, a technology expert at San Jose State University, was particularly impressed with the Rainier supercomputer, noting that it marks a shift toward deeper vertical integration. “By moving into specialized hardware, Amazon is signaling its intent to not just host the future of AI but to shape it,” he wrote. Banafa added that AWS is making it clear it intends to be an end-to-end AI powerhouse.
Grennan described Amazon’s strategy as a blend of major AI ambition and cautious moves. “They’re looking to be a one-stop shop for AI, building out complete solutions where they can produce and own the best of all worlds and not rely on competitors. Amazon has also invested $8 billion in Anthropic, and with good reason: Claude is arguably the best model out there, and Amazon has a need for a great model, especially with Alexa in every home. Anthropic will also use AWS cloud services and Amazon chips. But we’re seeing that with Nova, Amazon is hedging against overreliance on Anthropic to be their LLM of choice,” concluded Grennan.
An Amazon spokesperson emphasized that this is just the beginning, with plans for the company to launch even more capable models in the years to come. “Amazon is in this for the long term. We believe Amazon is in the best possible position to bring the benefits of gen AI to every household and every business in the world,” the spokesperson noted. “In 2025, we will bring even more capable models while driving cost down, through algorithmic and computing innovations. And we will continue to invest in the talent, technology, and infrastructure necessary to offer world class models to our customers for years to come,” they concluded.