Don't Google It
The company is in danger of becoming the spam it has spent 20 years trying to fight
In a recent post, I Kinda Hate The Internet Now, I lamented the destruction of Google search. What was "a once glorious feature that allowed us to find almost any information, video or image we could think up with the typing of a few words" has, like the rest of the internet experience, been eroded by gradual decay.
First came SEO gurus and growth hacking. Then came ads, targeted results and sponsored positions. Then Google itself started playing dirty tricks, manipulating search results, and its monopolistic power over what is seen and what isn't has lowered the quality of results. Before we knew it, search results couldn't be trusted. These days, you have to scroll an entire page to find the first result that isn't sponsored, gamed or put there by Google itself. The fundamental promise that the thing at the top of the search results was the best option, the promise that made Google so ubiquitous it spawned the phrase "just Google it," has been dismantled.
It sucks. It sucks so bad that people think asking an AI chatbot questions (which just pulls info from the same websites anyway) is a better alternative. That leads nicely to some recent developments that perfectly sum up the sorry state of affairs surrounding Google and offer a window into the continued destruction of anything good left on the internet in pursuit of maximizing shareholder value.
Last week, Google launched AI Overviews, an update to its core product that offers AI-generated summaries in response to user queries. The theory is simple: when someone types in a search, they no longer need to read any links; Google will use AI to scour the internet for the most relevant info and display it at the top of the results. Easy peasy. Finally, AI can do the one thing it seems half decent at: summarising stuff for people who are too lazy to read it.
Except it's been a total disaster. Instead of delivering accurate answers, it has served up disinformation aplenty. Some notable examples include:
Suggesting you should eat at least one small rock a day
Offering 'Improve your heart rate' as one of the health benefits of running with scissors
Declaring which Mario Kart characters are gay
Listing jumping off a cliff as a way to stay in the air longer
Suggesting you try jumping off the Golden Gate Bridge as an answer to 'I'm feeling depressed'
If you want more (and there is plenty more), Joe Youngblood has been tracking cases of inaccuracy and plagiarism here.
What a failure. The problem is twofold: where Google's AI pulls its information from, and its inability to tell a good source of truth from a bad one. Some of its answers are sourced from The Onion (which means obvious satire) and Reddit. Neither should be assumed truthful, certainly not by a feature designed to remove any impetus for the user to fact-check or do any further reading. If Google wants to say, "Hey, here's the TL;DR" and position it in the most coveted spot in search results, it has to be accurate. And yet, there they are: dumb, clearly wrong answers posted by Reddit users years ago, served up as a source of truth. Like many companies scrambling in the AI gold rush (which has yet to deliver any gold), Google has been sloppy and too quick to press go.
Speaking of shitty data, a quick reminder: Google paid Reddit $60 million to train its AI on the site's trove of data. Seriously, has any exec at Google ever been on the platform? Sure, it contains some useful answers, but it's equally full of first-in-class shitposting (and plenty of absolute nonsense). As some stellar reporting from 404 Media put it,
"The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, "you can also add about 1/8 cup of non-toxic glue" to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves "Fucksmith" and posted about putting glue on pizza 11 years ago."
Slapping an overly confident yet often wrong chatbot, trained on shitty data, into a service that's been marketed for decades as a portal to the world's knowledge is reckless.
It could even be legitimately dangerous.
For many, Google is the source of information. I'd wager that most non-tech-savvy people don't understand or care about the factors and algorithms behind search results and naively assume Google has their best interests at heart. Of course, it doesn't. How long before the first instance of someone taking this shitty advice as gospel?
Google has responded in predictable fashion, blaming users for trying to get it to mess up. So what if they are? That's fair game. The system should be able to withstand that scrutiny, and if it can't, it shouldn't have been released. Its defense that AI search is failing because the queries being asked are "uncommon" is total bullshit.
There's no denying this is a major misstep from a company whose reputation is already a far cry from what it once was.
It's also another body blow for AI and its attempts to convince the wider public that it's a genuine technological leap that will offer untold benefits. What we've seen here is where we're really at with AI: companies desperate to find a way to use it, regardless of whether anyone asked for it or wanted it. Google's AI Overviews is just another example of AI-stuffing that makes the core product worse, of a corpo juggernaut releasing things not for the sake of customer benefit but to please shareholders, of following the hype train so that execs get to shout buzzwords over and over and over (the term AI was used more than 124 times in the company's last two-hour keynote).
Google is in danger of becoming the spam it has spent 20 years trying to fight, all to chase a trend that's already showing signs of peaking.
What a mess.
Maybe it's time to start "Yahooing it."
Or perhaps that's even worse.
Maybe I'll try "Ducking it" for now.
One reason these AI results can be so bad is simply that the AI isn't, actually, intelligent. It does not know what a rock is, what relationship rocks have to eating, or what eating is, so it can suggest eating rocks. To summarize something accurately, you kind of need to know what you are summarizing; what an "AI" actually does is talk about whatever the internet has talked about in the immediate vicinity of whatever you asked about.
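To make that concrete, here's a toy sketch in Python (with an invented three-sentence corpus; this is nothing like Google's actual system, just a crude bigram model). It has no idea what a rock is; it only knows which words tend to follow which, so satire and health advice blend freely:

```python
import random
from collections import defaultdict

# Invented toy corpus: two bits of "advice" plus one line of satire,
# all living in the same neighbourhood of words.
corpus = (
    "geologists recommend eating at least one small rock per day . "
    "doctors recommend eating at least one apple per day . "
    "eating rocks is a joke from a satirical site ."
).split()

# Count which word follows which -- the model's entire "knowledge".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start, length=10):
    """Generate fluent-looking text by chaining statistically likely next words."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

# The model happily blends the joke with the medical advice, because
# nothing in it knows that rocks aren't food.
print(babble("eating"))
# possible output: "eating at least one small rock per day . doctors recommend"
```

Real LLMs are vastly more sophisticated than this, but the failure mode is the same in kind: fluent recombination of nearby text, with no model of the world to check it against.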
Aside from the question of why Reddit gets to keep that $60 million when the information was actually supplied by *its users*, maybe this will teach people to think and fact-check for themselves again! Or it's going to be a very quick round of survival of the fittest.