How to Access Google Bard, What's New, and Will LaMDA Be Replaced
Can LaMDA match GPT-4 and the new Bing?
Google announced yesterday that it is opening up access to Bard, its conversational search feature based on the LaMDA large language model (LLM). In other circumstances, Synthedia might have issued a second post on Tuesday as breaking news, but because this is just another waitlist, we are bringing it to you now. Thankfully, some people did get access fairly quickly, so you may not have to wait long either.
You can sign up here. One tip before you do: Bard is not accessible through corporate Google Workspace accounts. So, if you pay for Google’s productivity applications, you cannot access Bard through that account. You need to sign up with a free account, which may mean logging out of your business email and logging back in with a personal account. Not everyone is happy about this.
BTW - Tom Hewitson frequently offers unique insights on developments in this space and has been working at the intersection of AI and applications for several years.
Lowering Expectations
One of the most interesting elements of this announcement is how much it focuses on the idea that Bard may be wrong. In the main image at the top of this post, you will see that Bard introduces itself with that admission.
I’m Bard, your creative and helpful collaborator. I have limitations and won’t always get it right, but your feedback will help me improve.
The post also included the image below with the disclaimer that “Large language models will not always get it right.” Here, Google expands the context to say that errors in responses are not just an issue Bard faces; they are an emergent feature of the entire LLM category.
This shift in messaging comes after the Bard introduction in February, when Google was widely criticized for showing a Bard response that included an error. Of course, other LLMs also produce errors. However, Google is held to a different standard regarding search, and it made the mistake of not properly validating its canned demonstration and not being upfront about the limitations.
This is particularly important now that Bard is going out to a general user base, and errors will soon appear frequently in news articles and social media posts about the product. Ben Schoon from 9to5Google provides several examples, including:
Some of the mistakes I saw Bard make were as simple as an incorrect figure. For instance, a question about the Pixel 7 Pro saw Bard telling me that Tensor G2 was built on a 4nm process, something that’s simply not true. There are also plenty of errors that just go against common sense, such as Bard implying the Pixel 7 and Pixel 7 Pro haven’t been released.
Google didn’t limit its discussion of LLM shortcomings to factual errors. It also mentioned bias. “Because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in [LLM] outputs.”
These are similar to the disclaimers that OpenAI included on the ChatGPT landing page at launch. Even today, before logging in, ChatGPT shows a dialogue box disclaimer in addition to on-page warnings, and Bing Chat includes similar language on its landing page.
ChatGPT disclaimer: “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
Bing Chat disclaimer: “Let’s learn together. Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve.”
If this is Google’s shift to being more transparent and humble, it is a smart change in communications strategy. Lowering expectations can be painful in the short term when everyone criticizes you for being late to market with an inferior product. However, it makes it easier to exceed expectations later and start to re-establish the perception of technology leadership. Plus, you might as well draft off the standard set by your market competitors and lower those expectations.
Granted, Schoon from 9to5Google also says, “What’s frustrating is that Google Bard doesn’t cite its sources. While Bing shows links to where it pulls information throughout, Bard only occasionally shows a link to where its information came from. Maddeningly, you can’t even manually ask Bard to show that information.”
When Schoon asked Bard to cite its sources, Bard responded, “I’m designed solely to process and generate text, so I am unable to assist you with that.” Bing Chat provides sources for every query, and each one I have clicked on so far has been accurate.
This suggests that, for the search use case where citing source information matters, Bard is currently well behind Bing Chat and even Perplexity.ai. Without citations, Bard is hardly better than ChatGPT, which specifically says it is not a search engine.
Is LaMDA on the Way Out?
Another interesting comment in the Bard introduction relates to the potential for future updates to introduce new models beyond LaMDA.
Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time.
In some regards, this is an obvious statement. GPT-3 was updated with InstructGPT, GPT-3.5, and GPT-4. OpenAI has updated ChatGPT since its November 2022 launch. AI21 Labs just updated its Jurassic models. This statement may simply mean that LaMDA will be updated with newer versions of LaMDA. However, the way it is phrased suggests it could be replaced with another LLM.
You may have noticed that the new generative AI Workspace features Google announced for Docs and Gmail are powered by the PaLM LLM. PaLM is a larger model in terms of parameters, was trained on different data, and was designed for a different purpose. Bing Chat summarizes the differences below.
Perplexity.ai offered a more direct answer, responding, “PaLM and LaMDA are both large language models (LLMs) developed by Google, but they have different architectures and purposes. PaLM is a more versatile model that can be used for a wide range of use cases, while LaMDA is specifically designed for dialogue.”
Given these differences, it makes sense that PaLM is used for productivity application features, and LaMDA is powering the chat-oriented Bard. However, if PaLM can produce response quality closer to GPT-4—the model powering Bing Chat—Google may decide to deliver a Bard update based on PaLM.
Consider the expanded result from Perplexity, which highlights Google’s Pathways architecture and how it differentiates PaLM.
Pathways is a novel AI architecture that can handle many tasks at once, learn new tasks quickly, and reflect a better understanding of the world.
Google’s blog introducing PaLM states:
Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously. So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.
And of course an AI model needn’t be restricted to these familiar senses; Pathways could handle more abstract forms of data, helping find useful patterns that have eluded human scientists in complex systems such as climate dynamics.
Aren’t these the types of capabilities that would be helpful in search? You may recall that OpenAI’s Sam Altman has suggested GPT-4’s multimodal capabilities are critical to producing richer responses to user queries.
The question may boil down to the purpose of Bard. Remember that it is being simultaneously compared to ChatGPT and Bing Chat. The former is a conversational chatbot that can deliver information, summarization, and creative writing suggestions, but OpenAI says it should not be relied on for search use cases. The latter can perform similar tasks but is primarily tuned as a conversational search engine. What does Bard want to be?
It seems obvious that Google should employ whatever model helps it excel at the search use case. Consider it a plus if Bard can provide the other features in the same service. However, Google could always provide these features as separate services. Google should realize that the search use case will largely shape consumer perception of the company’s generative AI capabilities.
Let me know what you think about Bard when you get access. Also, let me lower your expectations. I have used LaMDA previously, and from what I have seen thus far from Bard, the product is less mature than Microsoft’s and OpenAI’s products. With that said, Google has a track record as an exceptional fast-follower, and it has multiple LLMs at its disposal. The challenge for Google now is to choose the best model for the task and get the use case alignment right.