The Story Behind CoreWeave's Rumored Rise to a $5-$8B Valuation, Up From $2B in April
Generative AI cloud computing is hot as GPU scarcity persists
CoreWeave raised $221 million at a $2 billion pre-money valuation in April. The round included funding from NVIDIA and was led by alternative asset investor Magnetar. The terms of the deal were not disclosed. It is fair to assume the Magnetar investment was not based on unsecured equity alone.
Bloomberg just reported that the cloud computing service specializing in generative AI is negotiating a new equity sale at a $5-$8 billion valuation. What is the rationale behind a 2.5-4x valuation rise in just four months?
The company is projected to generate $1.5 billion in 2024 revenue, though there is no information on how that compares to 2023, and the figure has not been confirmed by the company.
CNBC reported in June that “Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave.”
Earlier this month, CoreWeave secured $2.3 billion in debt financing.
The company set a new benchmark record on the MLPerf test using a cluster of 3,500 NVIDIA H100 Tensor Core GPUs training an Inflection AI LLM.
NVIDIA’s CFO and CEO have both mentioned CoreWeave publicly, and the GPU maker has invested more than $100 million in the “startup.”
What is Going on Here?
The uncharitable view is that the latest equity sale rumors are designed to offer company founders liquidity and that much of CoreWeave’s existing funding is secured against hardware assets. This may seem like a red flag if you are more familiar with software equity investments. However, it is not unusual for asset-intensive businesses such as cloud computing.
Michael Intrator, CoreWeave’s CEO, wrote in a blog earlier this month:
Led by Blackstone and Magnetar with participation from Coatue, DigitalBridge, BlackRock, PIMCO, Carlyle and Great Elm, we secured a $2.3 billion debt facility, which we will commit entirely towards purchasing and paying for hardware for contracts already executed with clients and continuing to hire the best talent in the industry.
This debt facility provides us the financial headroom and flexibility we need and underscores our company’s competitive positioning, unparalleled capabilities, and readiness to scale, as we acquire cutting edge technology for AI and model training purposes.
In 2017, we bought and plugged in our first GPU…We grew quickly and took our expertise in running tens of thousands of GPUs to build and bring the world’s first specialized cloud to market…No one was expecting this level of demand for GPU compute, but our strategic investments to increase capacity continue to pay off – and we’re delivering where others cannot.
We’re mobilizing quickly to expand capacity – we recently announced a new $1.6 billion data center in Texas, our first facility in the state…Just last month, CoreWeave delivered record-breaking performance results on the MLPerf™ benchmark, the reputable third-party benchmarking consortium. Working in partnership with NVIDIA, we unveiled the world’s fastest supercomputer that trained an entire GPT-3 LLM workload in under 11 minutes. The infrastructure that powered the MLPerf submission is used by Inflection AI to create one of the world’s most sophisticated large language models (LLM) on CoreWeave Cloud.
Consider this other recent comment by CoreWeave’s chief technology officer in a Wall Street Journal interview:
“We had spent $100 million on H100s. But the ChatGPT moment was when I was, like: ‘Everything we’ve thought from a scale perspective may be totally wrong. These people don’t need 5,000 GPUs. They need five million.’ ”
CoreWeave has experience efficiently scaling GPU performance and has access to a scarce commodity, NVIDIA GPUs. While NVIDIA announces record sales each quarter, it cannot keep up with demand for the H100 chips. According to a WIRED story this week:
Everywhere, engineering terms like “optimization” and “smaller model size” are in vogue as companies try to cut their GPU needs, and investors this year have bet hundreds of millions of dollars on startups whose software helps companies make do with the GPUs they’ve got. One of those startups, Modular, has received inquiries from over 30,000 potential customers since launching in May, according to its cofounder and president, Tim Davis.
“We live in a capacity-constrained world where we have to use creativity to wedge things together, mix things together, and balance things out,” says Ben Van Roo, CEO of AI-based business writing aid Yurts.
CoreWeave placed early orders for the first batch of H100s and apparently placed a large re-order before its cloud computing peers recognized how quickly market demand was rising. Its investment relationship with NVIDIA and the favor of its CEO are likely to put the company near the front of the line for NVIDIA’s next-generation GH200 GPU. The company’s recent debt financing can be used to support the new data center construction and a pre-order for a significant number of GH200s.
It is unclear whether the rumors of the giant valuation will be validated with a signed deal. It is also unclear whether CoreWeave’s market strength can be sustained once the GPU shortage is resolved. However, it seems clear that CoreWeave has a market advantage in GPU capacity at the moment, which may be enough to drive a higher valuation. The record-setting MLPerf benchmark may indicate it has expertise that could offer longer-term advantages.
I suspect the latest equity sale will not reach the lofty numbers published in the Bloomberg story. We have seen previous stories about Stability AI and Runway where the rumored valuations were not realized. With that said, given the market dynamics, it does seem likely the valuation has risen, and maybe significantly.
Beyond the valuation, this story highlights a key factor that will determine the rate of generative AI adoption that isn’t getting enough attention. GPU capacity and its rising cost, driven by shortages, may be the biggest near-term barrier to adoption. In addition, any foundation model provider that has already trained its model may have an advantage as competitors queue up for capacity or are forced to pay premium rates to bring new products to market.