CalypsoAI Raises $23 Million for AI Moderator and Security Solution
Generative AI risk mitigation tools
CalypsoAI announced a new funding round of $23 million led by Paladin Capital Group with participation from Lockheed Martin Ventures and others. The company said the new funding will go toward product development, hiring, and go-to-market efforts. The round brings the company’s total funding to $38.2 million.
Calypso Moderator, introduced in April 2023, is the company’s newest product. Its features range from data loss prevention and data quality verification to jailbreak prevention, policy enforcement, and malicious code detection. VESPR Validate is a separate, government-oriented tool for stress-testing machine learning models.
These products are part of a larger category that Calypso calls AISec. While the Moderator product targets large language model (LLM) management use cases, AISec is a broader concept covering the securing, monitoring, and management of any type of machine learning model.
Monitoring LLMs
A recent company video drives home a key message: “With great technological advancement comes great vulnerability.” Calypso intends to fill the gap for the AI model security, control, and policy management tools that companies deploying LLMs for enterprise use cases need. Neil Serebryany, CEO of CalypsoAI, commented during a Twitter video interview with Mourad Yesayan of Paladin Capital Group:
As all of these customers start to deploy and use more and more and more of these large language models, a lot of their questions really get down to how do I make sure my data doesn’t get used for retraining the models? How do I make sure I have guardrails across all of my usage of all of these large language models? And how do I make sure that I understand and have the ability to audit and trace all of the usage of large language models across my enterprise? Those are the key concerns and the key stumbling blocks for enterprises and government agencies.
Calypso identifies financial services, insurance, and government organizations as key customer segments. According to TechCrunch:
CalypsoAI claims that its tools — deployed as a container within an organization’s infrastructure — allow businesses to monitor and shape the usage of large language models such as ChatGPT via dashboards that show stats related to the toxicity of the models, user engagement and more. Serebryany says that CalypsoAI can prevent sensitive company data from being shared on models while identifying attacks coming from generative AI tools.
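CalypsoAI has not published implementation details, but the data-loss-prevention behavior described above can be illustrated in miniature: a gateway screens each outbound prompt for sensitive patterns before it ever reaches a public model. The patterns and function names below are hypothetical, chosen only to sketch the general idea.

```python
import re

# Hypothetical patterns a data-loss-prevention gateway might screen for
# before a prompt leaves the organization for a public LLM. A real product
# would use far more sophisticated detection than regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt bound for an external model.

    A violation is the name of any sensitive-data pattern found in the
    prompt; the prompt is allowed only when no pattern matches.
    """
    violations = [name for name, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)
```

In a deployment like the one TechCrunch describes, a check of this kind would run inside the organization’s own infrastructure, so blocked prompts never leave the network, and the per-pattern violation counts could feed the kind of usage dashboards mentioned above.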
“While every company wants to reap the benefits of generative AI solutions — namely the clear productivity gains — they also want to make sure they aren’t subject to cyberattacks and that employees don’t expose sensitive information to public models,” Serebryany added. “By implementing CalypsoAI, CISOs and IT leaders can start to seriously consider implementing generative AI solutions across their organizations in a safe and secure manner — allowing them to introduce efficiencies into their business without the added risk.”