Amazon Completes $4B Anthropic Investment, Links Up with Accenture
One investment that supports three strategic imperatives
Amazon committed to investing up to $4 billion in Anthropic in September 2023, an announcement that coincided with the first tranche of $1.25 billion. This week, Amazon announced it had added the remaining $2.75 billion, completing the $4 billion investment option.
CNBC reported that both investment tranches were based on a valuation of $18.4 billion. Amazon’s Swami Sivasubramanian, Vice President of Data and AI at AWS, cited the “notable history” of AWS working with Anthropic (a history measured in months) as an important factor in the continued commitment. More notably, Sivasubramanian commented on Anthropic’s use of Trainium chips to train the foundation model developer’s latest models.
The announcement follows Anthropic’s recent release of the Claude 3 LLMs: Haiku, Sonnet, and Opus. All three show impressive benchmark performance and lend credibility to claims that Anthropic is now on par with, or perhaps ahead of, OpenAI’s GPT-3.5 and GPT-4 models.
One Strategy with Three Benefits
Anthropic has been important to AWS for three reasons that we originally outlined back in September. First, Microsoft Azure has OpenAI’s GPT-3.5 and GPT-4, the most widely used large language models (LLMs). Microsoft also now has access to Inflection AI’s 2.5 models, along with its technology leadership and engineering teams, giving Microsoft another prominent, in-house LLM.
Google Cloud has the Gemini model family, and it claims that the Pro and Ultra models rival OpenAI’s top models. Anthropic is not exclusive to AWS, as it is also available in Google Cloud through Vertex AI. However, Anthropic does provide AWS with a leading frontier LLM to compete with its cloud rivals. It also provides Anthropic with a cloud hyperscaler home base where it is the leading frontier model on offer.
Second, AWS would like its customers to become comfortable using its Trainium and Inferentia AI chips as alternatives to NVIDIA for training and inference jobs, respectively. Anthropic’s use of this infrastructure lends those chips added credibility. Although Anthropic has a very small market share compared to OpenAI, its models are widely regarded as among the best proprietary foundation models available. In fact, Anthropic is currently among the top two or three leading alternatives to OpenAI.
AWS has its own in-house model family called Titan. While it is unclear how Titan compares to other leading models, it is not generally considered as advanced as Anthropic’s. This matters because the cloud hyperscalers all see AI workloads as the key driver of incremental computing spend over the next decade. Amazon’s Sivasubramanian commented in an earlier post:
I believe AI and ML are the most transformational technologies of our time… The ability to customize a pre-trained FM for any task with just a small amount of labeled data: that’s what is so revolutionary about generative AI. It’s also why I believe the biggest opportunity ahead of generative AI isn’t with consumers, but in transforming every aspect of how companies and organizations operate and how they deliver for their customers.
Third, the $4 billion investment may be a minority stake without a board seat, but it does help ensure Amazon will have long-term access to Anthropic models. It may also make it harder for Google to step in with an outright acquisition that could deprive AWS of Anthropic access.
The Three A’s Alliance
Anthropic also announced a new collaboration with AWS and Accenture, which will further cement Anthropic’s integration into AWS offerings for leading global enterprises.
Today we announced a collaboration with Amazon Web Services (AWS) and Accenture. All three organizations are providing key resources to take generative AI ideas from concept to production, especially those in regulated sectors where accuracy, reliability and data security are paramount. Enterprises will be able to deploy models to address their specific needs, while keeping their data private and secure.
Over 1,400 Accenture engineers will be trained as specialists in using Anthropic’s models on AWS, allowing them to provide customers with end-to-end support that accelerates their AI strategies from concept to production. Accenture’s engineers will help organizations use their own data to fine-tune Anthropic’s models on AWS to enhance performance for their use case and industry. Teams across Accenture and AWS will also guide customers on prompt and platform engineering to help them deploy AI models through Amazon Bedrock and Amazon SageMaker.
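In practice, deploying a Claude model through Amazon Bedrock means calling Bedrock’s `InvokeModel` API with an Anthropic-formatted request body. The sketch below, using the boto3 SDK, shows roughly what that looks like; the model ID and prompt are illustrative assumptions, and an actual call requires AWS credentials and Bedrock model access in your region.

```python
import json

# Illustrative Claude 3 model ID on Bedrock (assumption: verify against your region's model catalog)
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"


def build_claude_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the Anthropic Messages-style payload that Bedrock expects for Claude 3."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def invoke_claude(prompt: str) -> str:
    """Call Bedrock's InvokeModel API (requires AWS credentials and model access)."""
    import boto3  # imported here so payload construction works without the AWS SDK installed

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_claude_request(prompt)),
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]


# Build and inspect a request payload without making a network call
payload = build_claude_request("Summarize our Q3 compliance findings.")
print(json.dumps(payload, indent=2))
```

Keeping payload construction separate from the API call makes it easy for teams to validate prompts and parameters locally before routing traffic through Bedrock.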
This arrangement is designed to address the latest bottleneck slowing enterprise adoption of generative AI: there is more demand for implementation than there are skilled resources to scale generative AI solutions into production.
Note that Accenture is not assigning 1,400 engineers to work on Anthropic implementations. It is training them. Investment is required to fill this market gap. However, the mere availability of engineers trained in the technology will make it easier for Anthropic’s customers to move from concept to production. That faster shift to production will accelerate AWS's computing revenue from generative AI inference.
There is strong incentive alignment behind this agreement, assuming a large number of enterprise customers embrace Anthropic as their LLM provider, a scenario that seems likely.
The Big 3
The industry is starting to solidify around the big three proprietary LLM brands: OpenAI, Anthropic, and Google’s Gemini. A contingent of open-source models, from Meta and Mistral to Databricks, is also vying for leadership. However, market momentum sits primarily with OpenAI, secondarily with Anthropic and Google, followed by everything else. This is not good news for Cohere, AI21, Aleph Alpha, and other proprietary model makers; it is another indicator that each will likely need to follow a path of specialization.
Amazon’s investment in Anthropic is both an opportunity and a necessity. Without it, AWS would risk falling too far behind Azure and failing to keep pace with Google Cloud. AWS’s “all of the above” foundation model approach requires access to at least one leading LLM, making Anthropic the anchor tenant of Amazon Bedrock.