Generative AI Search Already Shows That SEO is Changing - New Report
43% of generative AI search answers in recent study included brand mentions
Search engine optimization (SEO) is a core element of marketing strategy. Showing up on the first page of search results for your brand and key industry terms is often a competitive advantage, and absence can be a competitive liability. However, the advent of generative AI search, which leverages large language models (LLMs) to deliver synthesized answers, is ushering in a new set of rules for marketing success.
ROAST, a UK-based performance marketing agency, released a new analysis of generative AI search results comparing Google Bard and ChatGPT for 500 questions related to travel in the UK. Key findings include:
Google is far more likely than ChatGPT to mention brands in its results. 43.2% of Bard’s answers mentioned the name of a brand, hotel, or company, while only 14.4% of ChatGPT’s did. For example, a question such as “how long is the flight to Malta” produces the answer: “The average flight time from London to Malta is 3 hours and 13 minutes... Here are some of the airlines that offer direct flights from London to Malta: Air Malta, British Airways, easyJet, Wizz Air”.
…
The report also found Google’s answers to be much more detailed and formatted to a high standard, often using bullet points, tables, and pros-and-cons lists, whereas ChatGPT’s output could often be very short or take on a persona within the response.
Bard is Big on Brands
Bard’s generative search results delivered three times more brand mentions than ChatGPT. British Airways was the standout for Google Bard and also led for ChatGPT, where all brands showed up with relatively low frequency.
In Bard’s results, British Airways’ 85 mentions were more than double those of its closest rival, American Airlines. Delta, United Airlines, easyJet, Virgin Atlantic, and Turkish Airlines were all close behind.
Bard Offers More Detail
The report also found that Bard offered significantly more detailed responses than ChatGPT. Bard averaged 218 words per answer versus just 31 for ChatGPT, as seen in the chart at the top of this post. That translates into about 1,350 characters per Bard response and 169 for ChatGPT, or about six characters per word for the former and five for the latter.
As the report puts it: “Both systems performed well to give answers to the questions … We found Google’s answers to be much more detailed and formatted to a high standard, often using bullet points, tables, as well as pros and cons lists.”
Generative Search vs Voice Search
ROAST also found that generative AI search was more likely to answer questions than voice search. ChatGPT and Bard answered 98% and 99%, respectively, of the 500 questions in the study. In past voice search studies conducted by the agency, only 50% to 75% of questions were answered.
This suggests that generative AI search may be more productive simply because it will answer significantly more questions. That performance should help drive consumer adoption.
See Google SGE in Action
Google Bard and ChatGPT are chatbots with answer capabilities, but they are not true search engines today. ChatGPT can only retrieve real-time information through a connection to Bing, which is available to Plus subscribers. The real search engines for generative AI search are Bing Chat, Perplexity AI, and Google SGE. I did an in-depth breakdown of Google SGE (i.e., Google search with generative AI answers) in a recent YouTube video and compared it to these alternatives.
Let me know what you think.