How the Cloud Hyperscalers Are Shaping the Generative AI Market
LLMs are the actors on the stage, but the cloud providers are the theaters of AI
Large language models (LLMs), or foundation models if you prefer, receive a lot of attention today. On the surface, it seems like a pitched battle among LLM providers to capture new users and use cases. That is true, to a degree. However, another way to look at the market is that the LLM wars exist within the larger context of the cloud computing wars.
Cloud Hyperscaler Generative AI Strategies
The three cloud-computing LLM ecosystems are depicted in the diagram above. There is little argument that OpenAI is the leader in LLM use and mindshare today. A recent Voicebot survey of developers in the conversational AI industry showed that over 90% of those who had used an LLM API had at least some experience with OpenAI. The next most widely tried LLM was cited by fewer than 50% of developers.
Azure is the only leading cloud provider that offers OpenAI's models, and it is also the cloud with the fewest third-party options today. AWS started with a strategy of featuring multiple third-party LLMs alongside its own Titan models in its Bedrock offering. It is a strategy designed to counterbalance Azure's OpenAI approach by providing more user choice.
Google began by focusing on its in-house PaLM LLM and other foundation models. However, it has quickly added third-party offerings to match AWS's selection. Azure is likely to add more third-party LLM options over time, but its clear first priority today is serving the significant demand for the Azure OpenAI Service.
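To make the distribution point concrete, here is a minimal sketch of what reaching an LLM through each hyperscaler looks like in practice. The model IDs, deployment names, endpoints, and API versions below are illustrative assumptions for this sketch, not details from this article; consult each provider's current documentation before relying on them.

```python
# Minimal sketch: the same prompt routed through each hyperscaler's LLM service.
# Model IDs, deployment names, endpoints, and API versions are illustrative.
import json

import boto3                                   # AWS SDK
import openai                                  # Azure OpenAI (openai<1.0 style)
import vertexai                                # Google Cloud Vertex AI
from vertexai.language_models import TextGenerationModel

PROMPT = "Summarize the benefits of managed LLM services in one sentence."

# --- AWS Bedrock: Amazon's Titan text model ---
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",    # illustrative model ID
    body=json.dumps({"inputText": PROMPT}),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])

# --- Azure OpenAI Service: a GPT deployment ---
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # your endpoint
openai.api_version = "2023-05-15"              # illustrative API version
openai.api_key = "..."                         # from the Azure portal
completion = openai.ChatCompletion.create(
    engine="gpt-35-turbo",                     # your deployment name
    messages=[{"role": "user", "content": PROMPT}],
)
print(completion.choices[0].message.content)

# --- Google Vertex AI: the PaLM text model ---
vertexai.init(project="your-gcp-project", location="us-central1")
palm = TextGenerationModel.from_pretrained("text-bison@001")
print(palm.predict(PROMPT).text)
```

Note what the sketch implies: the code an enterprise writes is tied to its cloud provider's SDK and model catalog, which is exactly why the hyperscalers' catalog decisions shape which LLMs get adopted.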
Meta’s Middle Road
A prominent third-party option that Azure does provide is Meta's Llama 2, which has quickly become a popular choice among open-source foundation models. To the surprise of many, Google Cloud also recently announced the availability of Llama 2. Every cloud provider knows it needs to offer open-source options, and Llama 2 is positioned to become a standard. Meta has announced that Llama 2 will also be available on AWS.
While other tech giants offer proprietary models, Meta has chosen to release Llama 2 as open source and is not looking to generate revenue from the foundation model. Meta would instead like to see a large developer community organize around Llama 2 and ultimately provide greater interoperability and compatibility of LLM-backed features. However, Meta understands that the primary route for enterprises to access LLMs will be through their preferred cloud providers. Llama 2 may eventually become the leading LLM overall by being the second most popular choice in each cloud.
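Llama 2's open weights are what make this cloud-agnostic strategy possible. As a minimal sketch, the snippet below loads the model with the Hugging Face transformers library; the same code runs unchanged on a GPU instance in any cloud or on-premises. Note the Llama 2 repository is gated, so this assumes you have accepted Meta's license and authenticated with a Hugging Face access token.

```python
# Minimal sketch of Llama 2's portability: because the weights are open,
# identical code runs on any cloud's GPU instances or on your own hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"   # gated repo; license acceptance required

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
    device_map="auto",           # requires the `accelerate` package
)

inputs = tokenizer("What does an open-source LLM enable?", return_tensors="pt")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```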
LLM Distribution
Cloud providers have emerged as the key distribution channels for LLMs. What they offer determines the selection set for enterprises. LLMs that cannot secure distribution through the leading cloud hyperscalers will have difficulty driving adoption. That is not to say they cannot be successful; the path will just include more significant barriers.
The key takeaway here is that you can understand a lot about how the LLM market is developing by observing the actions of the leading cloud providers. They are deciding which LLMs will be the easiest to adopt. They are offering discounted foundation model training to partners. The cloud providers know that LLMs drive a lot of compute, both for training and inference, and each wants to maximize how much of that flows through its servers. LLMs are just the latest venue for the cloud competition to play out.
One caveat to the diagram above: NVIDIA should arguably sit closer to Google, given Google Cloud and NVIDIA's recently expanded partnership to advance AI computing, software, and services (https://nvidianews.nvidia.com/news/google-cloud-and-nvidia-expand-partnership-to-advance-ai-computing-software-and-services).