Intel Launches Articul8, a New Company Offering a Secure On-prem Generative AI Stack
For enterprises that want more control over their generative AI solutions
Intel has launched a new company offering a full-stack generative AI capability that enterprises can deploy without external dependencies. DigitalBridge is the lead investor in the new company, with additional participation from Fin Capital, Mindset Ventures, Communitas Capital, GiantLeap Capital, GS Futures, and Zain Group.
Most generative AI solutions require connectivity to a hosted large language model (LLM) or cloud environment that potentially exposes company data or processing capabilities to third parties. Intel is betting that a number of companies will want to keep at least some of their generative AI solutions cordoned off from any outside connection and maintain full control over the technology stack and their own data. According to the announcement:
Articul8 AI [is] an independent company offering enterprise customers a full-stack, vertically-optimized and secure generative artificial intelligence (GenAI) software platform. The platform delivers AI capabilities that keep customer data, training and inference within the enterprise security perimeter. The platform also provides customers the choice of cloud, on-prem or hybrid deployment.
Articul8 was created with intellectual property (IP) and technology developed at Intel, and the two companies will remain strategically aligned on go-to-market opportunities and collaborate on driving GenAI adoption in the enterprise. Arun Subramaniyan, formerly vice president and general manager in Intel’s Data Center and AI Group, has assumed leadership of Articul8 as its CEO.
Value Proposition
Articul8 says it is founded around four principles: speed, scale, security, and sustainable cost. A key idea expressed by Articul8 is that many of the barriers to AI adoption are based on risks of exposing company intellectual property or data to third parties. A full-stack solution that can be deployed anywhere can overcome that hurdle and provide a more secure implementation. Dr. Arun Subramaniyan, Articul8 CEO and founder, commented in a video overview (see above):
If you are working with your partner, do you really know where your data lives? Does it align to the commitments you’ve made to your customers and regulatory requirements?
One answer to these questions is to implement your own LLM instance, trained on your data, and run on hardware you own and manage. However, few companies have the knowledge or skills to do this with internal teams. Intel, of course, would like to highlight the ability to run this on its processors, such as Gaudi2, though it says the solutions are “infrastructure and hardware agnostic.”
In addition, Subramaniyan says the industry's prevailing pay-as-you-go pricing models have confronted some enterprises with potentially high costs for generative AI-enabled solutions.
Pay-as-you-go pricing models can be challenging in general. What seems reasonable at a small scale becomes unsustainable when you apply it to your business needs. A proof of concept might just cost a few hundred dollars, but move to a production environment and it will move to a few million or tens of millions if you are building your own models. On the flip side, an enterprise contract with a provider might be a steep up-front cost, but ends up being a much cheaper option at scale.
Product
Intel’s announcement and the information released by the company are vague on the scope of the offering. Reuters reported:
The new entity, which will not be publicly traded and will be called Articul8 AI (pronounced "Articulate AI"), is an outgrowth of work on corporate AI technology that Intel initially carried out with Boston Consulting Group (BCG).
Using one of its own supercomputers, Intel developed a generative AI system that can read text and images using a combination of open-source and internally developed technology. Intel then modified that system to run inside BCG's own data centers to help address BCG's privacy and security concerns.
Intel previously released the Neural-chat-7B small language model (SLM), which briefly held the top spot on Hugging Face’s Open LLM Leaderboard. That model is based on the open-source Mistral 7B LLM. However, Articul8 doesn’t say which LLM it offers, only that it has a full-stack deployable solution. Its earlier deployment with BCG pre-dated the release of Mistral 7B.
So, the platform appears to include an LLM but may let users bring their own model. The materials and video also suggest that the offering may include associated services and up-front licensing costs. The indication is that Articul8 is providing a turnkey solution.
Control = Differentiation
A turnkey, secure, and fully enterprise-controlled generative AI installation is the differentiating element of Articul8’s announcement. Enterprises have many options when it comes to cloud-hosted LLMs provided by third parties. However, that means they are ceding significant control to the model developer, the cloud hosting company, or both. Articul8 and Intel are betting there is a market of companies that want full control of at least some of their generative AI solutions but are not ready to set up everything on their own. This is a logical strategy.
Articul8 highlights several industries, ranging from finance, aerospace, telecom, and semiconductors to government and life sciences, that may require greater control over their generative AI solutions. Whether these companies feel they must have full control to meet their objectives is an open question, as is whether Articul8 will be the company where enterprises place their trust. Still, this is sure to be a growing niche in the generative AI landscape.
One caveat on the security claims: keeping data and inference inside the enterprise perimeter addresses data-residency and confidentiality concerns, but it does not eliminate LLM-specific risks such as prompt injection, so "secure" should be read narrowly.