Will OpenAI Face Economic Headwinds as Azure Takes on New Customers?
Exploring lower-cost alternatives is also a trend, but OpenAI's position is strong
The Information published an article outlining some potential financial difficulties facing OpenAI. At the center of this thesis are two issues.
Several software providers that originally introduced generative AI-enabled features using OpenAI technology have migrated to alternative large language models (LLMs).
Microsoft’s Azure OpenAI Service enables enterprises to employ OpenAI technology, but with much of that revenue flowing to Azure, OpenAI receives less revenue from those users.
While the general concepts make sense, the situation may not be as dire as The Information is suggesting. First, let’s consider what was reported. According to the article (N.B. paywall):
OpenAI is no longer the only game in town when it comes to selling generative artificial intelligence. That’s beginning to affect the growth of its sales to corporate customers.
Less than a year after OpenAI launched ChatGPT and built a considerable consumer business, several big companies that were also early customers of its AI, such as Salesforce and Wix, say they are using less expensive alternatives. Some of those firms are paying for similar AI from competing providers that claim they can help the firms use AI more cheaply. Other customers are beginning to buy OpenAI’s software through Microsoft because they can bundle the purchase with other products. That’s a problem for OpenAI, as Microsoft keeps much of the OpenAI-related revenue it generates.
The Shift to Cost Reduction
OpenAI is definitely not the only game in the generative AI town. Amazon and Google are both betting that Anthropic will be a formidable competitor. They also hope their own LLMs, Titan and PaLM, respectively, will be popular alternatives. And Amazon AWS, Google Cloud, and Microsoft Azure are backstopping the potential shift to open-source models by promoting their Meta Llama 2 offerings.
However, there is a difference between having choices and reducing costs. OpenAI is an obvious choice for any company that is trying out LLM-enabled features. It is not necessarily the cheapest option. As many companies scale their solutions, cost becomes a comparable priority to ease of use or quality. This is a key reason behind Salesforce’s shift to internal LLMs. According to The Information:
Salesforce still uses OpenAI but is trying to power more of its AI services with open-source models as well as those it has developed in-house, both of which can be less expensive, said Salesforce’s senior vice president of AI, Jayesh Govindarajan.
The article also points out that Wix is testing Google’s Vertex AI model options in an effort to contain costs, and even Microsoft may be considering alternatives.
Earlier this year, “a lot of this stuff was only available from OpenAI,” Brosh said. “Now things are changing so fast, and our goal is to [be able] to use any large language model, not necessarily just OpenAI,” he said.
…
And larger companies like Microsoft are experimenting with swapping in open-source models to run products that previously relied on OpenAI’s models, to reduce costs.
At the same time, The Information seemingly confirms just about every company starts with OpenAI. That is incredibly valuable market power. Many of those will stay with OpenAI, at least for some of their workloads, once they invest in building out and optimizing a solution for the GPT family of LLMs.
Open-source models are a compelling alternative because they promise much lower cost. The Information provides an example, referencing Summarize.tech, which switched from GPT-3.5 to Mistral-7B-Instruct and cut its costs from $2,000 to $1,000 per month. Aside from the fact that this is a small-revenue customer, it is not a comparable model. A 7-billion-parameter model costs far less to run. Summarize.tech must have realized a fine-tuned smaller model would cost less, and that would be true whether the model was proprietary or open source.
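The economics here are simple per-token arithmetic. A minimal sketch of a token-metered cost model, using purely illustrative prices and workload volumes (none of the rates below come from the article or from actual OpenAI or Mistral pricing; they are chosen only so the totals line up with the $2,000 and $1,000 monthly figures reported):

```python
# Back-of-the-envelope monthly cost model for a token-metered LLM API.
# All prices and volumes are illustrative assumptions, not real pricing.

def monthly_cost(tokens_per_request: int,
                 requests_per_month: int,
                 price_per_1k_tokens: float) -> float:
    """Estimate monthly spend for a token-metered API."""
    total_tokens = tokens_per_request * requests_per_month
    return total_tokens / 1000 * price_per_1k_tokens

# Hypothetical workload: 2,000-token summaries, 500,000 requests per month.
larger_model = monthly_cost(2000, 500_000, price_per_1k_tokens=0.002)
smaller_7b = monthly_cost(2000, 500_000, price_per_1k_tokens=0.001)

print(f"Larger hosted model: ${larger_model:,.0f}/month")   # $2,000/month
print(f"Smaller 7B-class model: ${smaller_7b:,.0f}/month")  # $1,000/month
```

The takeaway is that at a fixed workload, spend scales linearly with the per-token rate, so moving to a cheaper, smaller model cuts the bill proportionally regardless of whether that model is proprietary or open source.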
While The Information’s reporting is not compelling on this point, the core thesis is important. Generative AI costs scale quickly with use. The more popular the generative AI application or feature, the more incentive there will be to move to a custom or open-source model. With that said, any organization that wants to do this will have to develop internal capabilities around training, operating, and monitoring LLMs. Those skills are in short supply today, and accessing them through external experts may exceed the cost of relying on proprietary models.
In addition, if you run your own models, you do not benefit from the advances in quality and cost reduction that the proprietary model developers will roll out over the next two years. This is a key decision that companies need to make right now. It is also a reason why Amazon Bedrock and Google’s Vertex AI may succeed. These will be far easier ways to access open-source LLMs than setting up and maintaining your own infrastructure.
Cost will emerge as a key point of competition as quality meets basic thresholds of performance across the LLM product landscape. This is a key reason why the reported failure of OpenAI’s Arrakis LLM cost-reduction project could become an issue. Despite that setback, OpenAI did reduce prices in June and it is likely you will see another round of price cuts in 2024.
Software Companies vs Enterprises
Another variable to consider is that software businesses like Salesforce and Wix have more incentive to shift to open-source models and more internal capabilities to manage the software stack. Since running software and accessing third-party digital services like LLMs are a significant portion of operating costs, reducing these dependencies is always a priority. These companies also have a deeper bench of software engineering expertise, can justify investment in more machine learning talent, and are more likely to see the generative AI capability as a differentiated offering.
Enterprises confront a different calculus. They also want to minimize cost, but they must also confront complexity and its contribution to total cost of ownership. Software companies may be able to amortize their LLM costs across a substantial portion of their business, particularly as generative AI features become a competitive necessity to match their peers. Enterprises are often adding generative AI features to solve a specific business problem, and those costs must be justified within that context alone.
In addition, many business divisions within companies do not control their own technical resources. These are very often a shared service run through the CIO or CTO office, which has a long list of projects it is already committed to supporting. General managers who want to move quickly often must reduce dependence on internal resources, and that means lower complexity may be the only option. Accessing a proprietary solution generally comes with lower complexity. In the LLM category, OpenAI is the most widely supported option and has the most familiarity among software engineers.
These factors suggest that OpenAI may face headwinds retaining software company customers. However, it will likely face fewer challenges maintaining enterprise customers in other industries.
Azure vs OpenAI APIs
The bigger challenge for OpenAI will be retaining customers for its direct APIs when Azure OpenAI Service offers access to the same LLMs wrapped in a variety of enterprise-friendly cloud services. The chart from The Information depicted above shows that some customers access OpenAI models exclusively through Azure, others exclusively through OpenAI’s own APIs, and some through both. The hypothesis is that a shift toward more OpenAI use through Azure is bad for OpenAI because it captures less revenue from those contracts.
However, this ignores the fact that OpenAI incurs lower costs for customers that access its models through Azure. It does not have to build elastic infrastructure, meet onerous service level agreements, or provide the other services that large cloud computing customers require. This is consistent with OpenAI’s business model and enables the company to focus on building and optimizing generative AI foundation models rather than running a hosting business.
Microsoft revealed in its earnings call last week that it now has 18,000 customers using Azure OpenAI Service, up from 11,000 in July and 4,500 in May. OpenAI didn’t have to onboard all of those users or provide front-line customer support. This suggests robust business growth. In addition, companies accessing OpenAI through Azure are more likely to be planning or running production-grade solutions. If they were simply experimenting, it would be just as easy to hit OpenAI’s APIs directly.
My conclusion is that the Azure partnership is working as intended. This should be viewed as a revenue accelerant as opposed to suppressing revenue due to margin sharing with Azure.
The bigger potential issue for OpenAI involves enterprises that don’t use Azure and instead standardize on Google Cloud or AWS. These companies can try Azure or go directly to OpenAI’s API. However, if OpenAI opens up relationships with other cloud hyperscalers, it would signal a new round of accelerated growth. I know of companies that would prefer to access OpenAI LLMs through AWS, for example. It is unclear whether OpenAI’s agreements with Microsoft preclude this, but Sam Altman did say in January that the Microsoft agreement was not exclusive.
The Conclusion
The Information article is interesting and hits on some important trends. OpenAI has dominant mindshare and appears to have dominant market share for LLMs at the moment. It will surely lose some market share as lower cost and more performant LLM options arise as credible alternatives. However, its enterprise business is about to take off and Azure will be a big part of that. Azure is not a competitor to OpenAI, it is the company’s biggest go-to-market asset.
OpenAI is on track to generate more than $1 billion in revenue and The Information reports its current twelve-month run rate is $1.3 billion. However, that only considers current subscription revenue and committed API access agreements. It is likely that the Azure fees to OpenAI in 2024 alone will exceed $1 billion. Add that to direct OpenAI API access, ChatGPT Plus subscriptions, and ChatGPT Enterprise fees, and it appears that the OpenAI revenue train will be just fine for the foreseeable future.