Google's AI made up a fake cheese fact that wound up in an ad for Google's AI, perfectly highlighting why relying on AI is a bad idea
-
The weirdness came partway through, when the ad actually showed Google Gemini in action. It told the cheese vendor that Gouda accounts for "50 to 60 percent of the world's cheese consumption." Now, Gouda's hardly a hardcore real head pick like Roquefort or BellaVitano, but there's also no way it's pulling in cheddar or mozzarella numbers. Travel blogger Nate Hake and Google-focused Twitter account Goog Enough documented the erroneous initial version of the ad, but Google responded by quietly swapping in a more accurate Gemini-suggested blurb in all live versions of the ad, including the one that aired during the Super Bowl.
-
Slightly off topic, but the writing on this article is horrible. Optimizing for Google engagement, it seems. Ironically, an AI would probably have produced something vastly more readable.
-
That is exactly the point. LLMs aim to simulate the chaotic, best-guess flow of the human mind: to be conscious, or at least to present the appearance of thinking, and from that to access and process facts, not to be a repository of facts in themselves. The accusation that the model constructed a fact and then built on it misses the point; this is exactly how organic minds work. Human memory is constantly reworked and altered based on fresh information and simple musings, and the reworked memory is taken as factual even though it is in large part fabricated, increasingly so over time. Many of our memories of past events bear only cursory fidelity to the actual details, to the point that they could be called imagined. We still take those imagined memories as real and act on them, exactly as the AI model did here.
-
An LLM is a random person on the internet, or the first link in a search.
If you wouldn't blindly trust them, don't trust it.
-
Especially considering that the "pointing out of said hallucinations" comes much later than the sharing, and NEVER spreads as far and wide as the initial bullshit.
-
That user goes around making weird and pointless corrections to other people's comments, so I thought it'd be funny to do the same in turn.
-
They are also amazing at generating configuration that's subtly wrong.
If the bad LLM-generated configurations I've caught during pull request reviews are any indication, there are plenty of less experienced teams running broken Kubernetes deployments; see the sketch below for the flavor of mistake I mean.
Now, to be fair, inexperienced people make similar mistakes, but inexperienced people are capable of learning from their mistakes.
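To make "subtly wrong" concrete, here's a hypothetical sketch of the kind of Deployment manifest an LLM might hand you (all names invented; not from any real review). Every field is syntactically valid and `kubectl apply` accepts it, yet three details are quietly broken:

```yaml
# Hypothetical LLM-style Deployment: validates cleanly, fails in production.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # invented name for illustration
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25
          ports:
            - containerPort: 8080    # bug: stock nginx listens on 80, not 8080
          resources:
            limits:
              memory: "512m"         # bug: lowercase "m" is the milli suffix,
                                     # i.e. a fraction of a byte; "512Mi" was
                                     # almost certainly intended
          livenessProbe:
            httpGet:
              path: /healthz         # bug: stock nginx serves no /healthz, so
              port: 8080             # the probe fails and the pod restart-loops
```

Each of these sails through a review that only skims for syntax, which is exactly the trap: the config looks idiomatic, so reviewers and inexperienced authors alike assume it works.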
-
Can take the user off reddit, but the reddit never leaves the user