Anthropic Closes $450 Million in Funding, Bringing Total Funding to $1.8 Billion
It costs a lot to compete with Microsoft and Google
Anthropic announced yesterday that it had raised a $450 million funding round led by Spark Capital, with additional funding from Google, Salesforce Ventures, Zoom Ventures, and others. You may recall that Zoom mentioned an undisclosed investment in Anthropic last week when it announced it was adding the company's large language model to its products.
This brings Anthropic’s funding so far in 2023 to over $1 billion.
May 2021 - $124 million
April 2022 - $580 million
February 2023 - $300 million (Google)
March 2023 - $300 million (Spark)
May 2023 - $450 million (Spark, Google, et al.)
Context (Funding) Window
The funding announcement comes just two weeks after Anthropic unveiled an industry-leading context window of 100,000 tokens, or approximately 75,000 words. For perspective, Anthropic demonstrated the feature by loading the entire text of The Great Gatsby, about 72,000 tokens, into a single prompt.
The large context window is attractive for several reasons.
It enables users to query or summarize larger tracts of text without having to add them to a knowledge base, saving time and cost.
It provides the opportunity to maintain longer conversational interactions without losing context. This was an issue early on for Bing Chat.
It can help mitigate hallucinations by encouraging the LLM to ground its answer in text supplied directly in the prompt rather than in fragments retrieved from an external vector database.
It offers more space for including meta prompts to increase user personalization.
The key downside of large context windows is that you pay for every token you add to the prompt, and response latency rises as prompts grow. However, you can see in the video below the immediate value a large context window can offer.
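To make the cost point concrete, here is a back-of-the-envelope calculation in Python. The per-token prices are placeholder assumptions for illustration, not Anthropic's actual rates.

```python
# Rough cost of a single request that fills a 100,000-token context window.
# The prices below are placeholder assumptions, not Anthropic's actual rates.
PROMPT_PRICE_PER_MILLION = 8.00       # assumed $ per million prompt tokens
COMPLETION_PRICE_PER_MILLION = 24.00  # assumed $ per million completion tokens

prompt_tokens = 100_000      # a maxed-out context window (~75,000 words)
completion_tokens = 1_000    # a short summary in response

cost = (prompt_tokens * PROMPT_PRICE_PER_MILLION
        + completion_tokens * COMPLETION_PRICE_PER_MILLION) / 1_000_000
print(f"~${cost:.2f} per full-window request")  # paid on every call
```

At those assumed rates, every full-window request costs on the order of a dollar, which adds up quickly for interactive use.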
Safety First
While Anthropic’s large context window is a clear differentiator for its LLM Claude, the company also stresses its different approach to AI safety. Where OpenAI tends to rely on reinforcement learning from human feedback (RLHF), rule-based reward models (RBRM), and dataset monitoring, Claude is trained using Constitutional AI. This approach is designed to enable the model to identify safety issues on its own, guided by a written set of principles and iterative self-critique. A key hypothesis is that Constitutional AI is more scalable and more closely aligned with how humans develop values.
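Anthropic's published research describes Constitutional AI as a loop in which the model critiques and revises its own answers against a written list of principles, and those self-revised answers then feed supervised fine-tuning and reinforcement learning from AI feedback. The sketch below is a minimal illustration of that critique-and-revise loop; the llm() helper is a stand-in stub, not Anthropic's model, API, or training code.

```python
# Highly simplified, illustrative sketch of the Constitutional AI
# critique-and-revise loop. llm() is a placeholder stub for a model call.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and transparent.",
]

def llm(instruction: str) -> str:
    """Stand-in for a call to a language model."""
    return f"[model output for: {instruction[:60]}...]"

def constitutional_revision(prompt: str, rounds: int = 1) -> str:
    draft = llm(prompt)  # initial, unfiltered draft answer
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # The model critiques its own draft against a written principle...
            feedback = llm(f"Critique this response per '{principle}':\n{draft}")
            # ...then rewrites the draft to address its own critique.
            draft = llm(f"Revise the response to address the critique:\n"
                        f"{feedback}\n\nOriginal:\n{draft}")
    # In Anthropic's method, these self-revised answers become training data,
    # reducing reliance on human preference labels (as in RLHF).
    return draft

print(constitutional_revision("How do I pick a strong password?"))
```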
With that in mind, Anthropic positions itself as both a product company and an AI safety research organization. It is also organized as a Public Benefit Corporation, so maximizing profit cannot be its sole purpose.
Funding First
From OpenAI’s experience, Anthropic learned the importance of large-scale funding to support LLM development. OpenAI started as a non-profit and later created a for-profit arm to raise more capital from companies like Microsoft. Training and running LLMs is expensive, and competing against the likes of Microsoft, Google, Amazon, and Nvidia is costlier still. Just look at Microsoft's dozens of announcements with OpenAI this week.
Google’s continued presence in Anthropic’s funding rounds suggests it is hedging its bets. If Anthropic is highly successful, Google will have an inside track ahead of its tech giant rivals to acquire the technology and employ it much as Microsoft has done with OpenAI. Even if Google’s own PaLM and Gemini models prove superior, the investment may help ensure its rivals don’t have access to Anthropic’s technology.
The generative AI wars are being waged on many fronts simultaneously—and money matters.