Luzia Nets $10M in Funding for Generative AI Assistant That Lives in WhatsApp and Telegram
It also speaks Spanish, Portuguese, and English
Luzia is a generative AI-powered virtual assistant that leverages large language model (LLM) technology from OpenAI and Meta’s Llama, along with the Kandinsky text-to-image model, to offer a ChatGPT-like experience inside WhatsApp and Telegram chats. The company started in Spain and rolled out Spanish and Portuguese versions before adding English. This week, it announced it has raised $10 million in a new funding round.
The new round brings the company’s total raised to $13 million, and the cash infusion will be used to grow its U.S. user base. Luzia will be building off a solid base of activity, according to information released by the company:
Luzia's mission is to be the most reliable and intelligent Personal Assistant that aids users with their everyday tasks and inquiries. The free platform serves as a trusted AI ally, available through WhatsApp and Telegram contacts, to almost 17 million users. Luzia has processed over 900 million messages leveraging APIs from various companies, including OpenAI, Llama and Kandinsky, among others. Responses provide answers, transcribe audio, translate multiple languages and generate real-time images, all while prioritizing privacy and security. As the exclusive contact for harnessing AI's true power, Luzia understands and responds to simple voice or text commands.
Multi-model and Multimodal
The multi-model approach is interesting, probably a good business strategy, and also a source of inconsistent results. It is a good business strategy for multiple reasons. First, the potential to reduce cost by employing open-source models where practical and fee-based proprietary models when necessary makes economic sense. Running Llama 2 from Meta isn’t exactly free. Meta does not charge for access, but Luzia still must host the model and run inference jobs, which carry computing costs. However, that will be less costly than paying OpenAI inference charges at any significant scale.
It is also a good strategy because it avoids vendor lock-in. If a more powerful model comes out, Luzia is already designed to access multiple models and can switch more easily. Likewise, if pricing changes or new restrictions run counter to Luzia’s interests, the company can switch to another model.
The multi-model approach is also a prerequisite for providing multimodal text and image capabilities. Notably, the Kandinsky text-to-image model is open source, which gives the company lower costs and more control.
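The routing logic behind such a multi-model strategy can be illustrated with a small sketch. To be clear, this is a hypothetical example, not Luzia’s actual architecture: the model names, cost figures, and the `needs_premium` heuristic are all invented for illustration. The idea is simply that requests default to a cheaper self-hosted open-source model and escalate to a paid proprietary API only when a request appears to need it.

```python
# Hypothetical sketch of cost-aware multi-model routing. All names, prices,
# and heuristics here are illustrative assumptions, not Luzia's real system.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative numbers, not real pricing
    handler: Callable[[str], str]  # stands in for a real inference call


def route(prompt: str, models: list[Model],
          needs_premium: Callable[[str], bool]) -> str:
    """Send the prompt to the cheapest model unless it needs a premium one."""
    candidates = sorted(models, key=lambda m: m.cost_per_1k_tokens)
    chosen = candidates[-1] if needs_premium(prompt) else candidates[0]
    return f"[{chosen.name}] " + chosen.handler(prompt)


# Stub handlers standing in for real self-hosted and API-based inference.
llama = Model("llama-2-self-hosted", 0.0005, lambda p: "open-source answer")
premium = Model("proprietary-api", 0.0020, lambda p: "premium answer")

# A toy escalation heuristic: long prompts or image requests go premium.
needs_premium = lambda p: len(p) > 200 or "image" in p.lower()

print(route("What is the capital of Spain?", [llama, premium], needs_premium))
```

A real router would base the decision on task type (text vs. image generation, for instance) and observed quality rather than prompt length, but the economic shape is the same: the expensive model is the fallback, not the default.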
A potential downside of the multi-model approach is inconsistency. I tried out the WhatsApp version and found that fact-based questions about recent events were sometimes answered correctly, while other times the assistant reverted to OpenAI’s standard fallback apology that it doesn’t have “access to real-time information.” Except that isn’t true.
A question about who won the pole position for the U.S. Grand Prix a few hours ago returned the correct answer—Charles Leclerc. However, when I asked who won the Formula 1 drivers’ championship for 2023, it said it could not answer the question. And it incorrectly responded that the pole position was the seventh of Charles Leclerc’s Formula 1 career. He has won twenty-one.
So, there may be some work to do on reducing hallucinations. Perplexity had no problem with any of these questions, nor did Bing Chat or ChatGPT with the Bing web browsing plugin. Google SGE was correct on the pole position but incorrectly listed Leclerc’s Formula 1 career total as 20. It was close, but Google SGE failed to count today’s pole position in the total.
The final multimodal element is voice input. You can send a voice message, and Luzia will transcribe it and then respond with text. There is no audio output at this time, but voice input is supported.
Meeting Customers Where They Are
The more intriguing factor for Luzia is where you access it. Instead of asking users to download an app, the company directs them to WhatsApp and Telegram. It is the lowest-friction approach I have experienced with generative AI chat assistants. You just click to join a chat with Luzia and start using it like ChatGPT.
This channel strategy means mobile users do not need to change their habits to use Luzia. There is no need to download an app, register, and remember to open it when you have a question or a task. Users can reach Luzia from an app they already use every day.
Luzia is also significant because of its language support. While the company is expanding its English-language offering, the core service is in Spanish and Portuguese. These markets are less well served today than English-language markets, so it is interesting to see the company originate from that angle. Granted, this is not a surprising go-to-market approach for Luzia, given that it is headquartered in Madrid.
Given the 17 million users for an early-stage startup and its interesting product and market strategy, Luzia looks like an assistant worth tracking.
If you would like to read more about generative AI assistants and voice assistants, check out my article on LinkedIn from earlier today: Does ChatGPT Mark the End of the Voice Assistant Era, or is it a False Comparison?
Super curious, since it can be accessed freely on WhatsApp and you don’t have to sign up, what ways are there to monetize? It sounds like they are currently focusing on growing their user base, but how can they achieve profitability in the long run?
I am using it in Spanish, and it looks great!