With the rapid spread of artificial intelligence (AI), the competitive landscape of the cloud industry is undergoing significant change. American technology giants are pouring enormous sums into building data centers, treating AI computing infrastructure as the key battleground of the coming years. Against this backdrop, Amazon Web Services (AWS), one of the world's largest cloud service providers, faces new challenges and opportunities.
Recently, a senior AWS executive shared the company's strategy in AI semiconductors and discussed how it intends to maintain its market leadership. The executive pointed out that NVIDIA's share of the AI semiconductor market has reached an extremely high level, which poses a major challenge for AWS.
To meet this challenge, AWS is actively strengthening its AI capabilities and semiconductor strategy. In November, AWS and OpenAI announced a seven-year strategic partnership worth US$38 billion (approximately 5.7 trillion yen). AWS will provide NVIDIA Hopper and Blackwell series GPUs to support the training and inference of OpenAI's large language models. This partnership not only strengthens AWS's position in the AI infrastructure market, but also demonstrates its strength in large-scale GPU deployment.
In addition, AWS continues to invest in in-house AI chips, chiefly Trainium and Inferentia, which are intended to compete with NVIDIA GPUs. However, according to internal documents and feedback from multiple companies, Trainium 1 and 2 still lag behind NVIDIA's H100 in performance, and the service stability and availability of Trainium 2 still need improvement.
AWS is also developing an AI supercomputer called "Ultracluster", expected to be completed this year and equipped with its in-house Trainium chips. A separate effort, "Project Ceiba", will integrate more than 20,000 NVIDIA Blackwell GPUs to further boost AI computing capacity. At the same time, AWS has launched Ultraserver, built on Trainium chips and designed specifically for AI model training, and is offering OpenAI models to its own cloud customers through the Bedrock platform to expand commercial applications of generative AI.
In terms of market competition, NVIDIA still dominates, but AWS, Google, AMD, and others are actively pursuing in-house chips and partnerships in an attempt to break that near-monopoly. Google has signed a multi-billion-dollar contract with Anthropic to supply TPUs for AI model training, while AMD has partnered with OpenAI to build 6 gigawatts of AI infrastructure, a deal that includes a potential equity component.
AWS Japan, for its part, emphasizes applying generative AI in two major areas, "business innovation" and "productivity improvement", and is promoting an ecosystem of enterprise partners to help customers adopt generative AI technology.
With the further development of AI technology, how AWS’s future strategy will evolve deserves continued attention from the industry.