Google’s new AI search prone to ‘hallucinations’


Tech giant Google is going all-in on artificial intelligence with the rollout of an experimental AI-powered search feature that experts say could give the company unprecedented power over what we see online.

Instead of directing users to a page with outgoing links to diverse sources, the new search function uses an AI language model to answer your questions directly, pulling its information from numerous articles online.

The convenience of reading an AI answer instead of clicking through websites, along with rival search giants like Microsoft Bing also incorporating generative AI, has some experts believing mainstream adoption is all but inevitable. Yet experts say to take the robot’s advice with a grain of salt: the language model, like the one powering ChatGPT, is prone to presenting false information as fact, a phenomenon termed “hallucinations,” and even to manufacturing sources.

For now, the test feature is only available to certain Chrome and Google app users in the U.S. “We hope to expand outside the U.S. soon,” the company says, although it’s unclear when. The test run will end in December.

How does Google’s generative AI search function work?

Those approved for the test can use Google as usual, except a box with an AI-generated “snapshot of key information” will appear before any articles on their search subject, the company said. Unlike ChatGPT, Google’s AI cites the sources it pulls from and makes them available for deeper reading.

Below the AI snapshot are options to ask follow-up questions and suggested next steps — clicking on these switches the site to a new “conversational mode” where users can ask further questions and receive context-specific answers.

A screenshot of Google's new AI search function, which provides an AI summary of your query pulling its information from sources across the web.

Daniel Russell was a senior research scientist for Google who worked on the search engine from 2005 to 2023. He’s now a professor at Stanford’s Institute for Human-Centered Artificial Intelligence. “I’d be surprised if (generative AI search) is not a dominant factor by the middle of next year,” he told the Star in an interview.

Unlike classic search engines, which provide links based on short keywords in your query, Google’s new feature works like a chatbot — able to understand nuanced phrases and full sentences, Russell said, while replying in natural-sounding prose.

The new search function runs off Google’s next-generation Pathways Language Model 2 (PaLM 2) and another technology known as a Multitask Unified Model, or MUM.

Google and its parent company Alphabet Inc. did not respond to the Star’s requests for comment before publication.

Don’t believe everything you read online

While a generative AI-powered search engine might feel more convenient, it comes with its own pitfalls, Russell said — as a large language model, Google’s AI is prone to manufacturing false information, what computer scientists call “hallucinations.” It’s a common problem with AI chatbots, from ChatGPT to Google’s own Bard.

“One of the biggest problems with (AI search) is when you get a page of what looks like reasonably well written prose, you tend to believe it,” Russell continued — but it’s not always true. “In general, you don’t know where that (information) truly came from … You don’t actually know how it’s being stitched together, how it’s being evaluated, but it sounds incredibly plausible. And that plausibility is a problem.”

And although Google’s AI cites its sources, it’s possible these are manufactured too.

“In my testing (of Google’s new feature), it made up a citation with a fake journal name, fake page numbers and a fake publication date,” Russell said. “ … Having said that, I think they will get a handle on this and basically make it go away. But it’s not quite there yet.”

Joel Blit is an associate professor studying AI and the economics of innovation at the University of Waterloo. As AI search takes off, users will need to get into the habit of fact-checking their search results, he said.

“It’s a large language model — there’s always a risk that it’s going to hallucinate and give you something that is incorrect,” Blit continued. “ … There will also be cases, of course, where (Google’s) synthesizing the information in a way that may contain bias, that omits important things.”

And although the feature will link out to websites containing more information, Blit finds it unlikely a majority of users would click on them, settling for Google’s AI summary.

Google and its information monopoly

Google already has a “tremendous amount of control over what everyone in the world is seeing,” Blit said — with the introduction of Google AI, the company could wield unprecedented control over what information people see online.

“If they end up having a monopoly on sort of this new version of a AI powered search, you really have to worry about many people only getting whatever they’re looking for from a single source — and it’s going to be more highly edited than ever before,” he continued.

“I think they’re trying to be as responsible as possible,” Blit said. “But there’s lots of issues — and as a civil society, we shouldn’t just trust them to do the right thing.”

Russell said the problem is neither new nor limited to Google — the company grappled with it in classic search when deciding which sources to show, and so did its search engine peers.

“We’re still searching for a solution, and it’ll take a long time. There’s no easy fix for this,” he said.

AI search will disrupt the content creation industry

Because searchers will likely settle for Google’s AI summary, the dozens of articles used to create that blurb each lose a potential click. That means the journalists, bloggers and other content creators relying on traffic from classic search engines may soon be in for a harsh surprise, with Blit forecasting dramatic drops in readership.

“I see this technology creating winners and losers — and the biggest losers are the content creators,” Blit said. “I wouldn’t be surprised if, for a lot of these content providers, their traffic goes down by 50 per cent or more in the coming years.”

Russell said solving this issue will be the “million-dollar question,” adding that there may need to be an entirely new monetization model to account for AI.

“A potential model would be to have micropayments based on impressions,” Russell suggested — giving a cut to publishers that Google’s AI cites, based on how many people view its summary. “Something like that will have to exist,” he said — AI needs a steady supply of articles to function.

“I think most people are going to love the service. It’s going to be incredibly convenient, incredibly useful,” said Blit. “But hopefully, we can start having this discussion around what we do need to worry about and what different ways could we address it.”

