Google's AI made up a fake cheese fact that wound up in an ad for Google's AI, perfectly highlighting why relying on AI is a bad idea
-
What do you mean it's not generating the results? If the summary isn't generated, where's it come from?
-
I don't want to speak for OP, but I think they meant it's not generating the search results using an LLM
-
-
It's a very different process. Having worked on search engines before, I can tell you that the word "generate" means something different in this context. It means, in simple terms, to match your search query against a bunch of results, gather links to those results, and then send them to the user to be displayed
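Roughly, in (made-up) Python, "generating" results in the search-engine sense looks like this. A toy inverted index, purely illustrative, nothing to do with any real engine's internals:

```python
# Toy inverted index: "generate" = match the query, gather the links.
# All names and data here are made up for illustration.

def build_index(pages):
    """Map each word to the set of page URLs containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

def generate_results(index, query):
    """'Generate' results: match query terms, collect matching links."""
    matches = [index.get(t, set()) for t in query.lower().split()]
    if not matches:
        return []
    hits = set.intersection(*matches)
    return sorted(hits)  # a real engine ranks; alphabetical is a stand-in

pages = {
    "https://example.com/cheddar": "cheddar is a firm cheese",
    "https://example.com/brie": "brie is a soft cheese",
}
index = build_index(pages)
print(generate_results(index, "soft cheese"))  # ['https://example.com/brie']
```

No text gets written by the engine at any point; it's lookup and retrieval all the way down.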
-
then send them to the user to be displayed
This is where my understanding breaks. Why would displaying it as a summary mean the backend process is no longer a search engine?
-
They should have kept quiet and let Google show how shit they are on live TV
-
The LLM is going over the search results, taking them as a prompt and then generating a summary of the results as an output.
The search results are generated by the good old search engine, the "AI summary" option at the top is just doing the reading for you.
And of course, if the answer isn't trivial, it's very likely generating an inaccurate or incorrect output from the inputs.
But none of that changes how the underlying search engine works. It's just doing additional work on the same results the same search engine generates.
EDIT: Just to clarify, DDG also has a "chat" service that, as far as I can tell, is just a UI overlay over whatever model you select. That works the same way as all the AI chatbots you can use online or host locally, and I presume it's not what we're talking about.
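The layering is easy to sketch. Everything here is a placeholder (`call_llm` stands in for whatever model the site actually uses), but it shows the point: the engine's output is unchanged, the summary is extra work bolted on top:

```python
# Hedged sketch of the "AI summary" layering. Made-up functions throughout.

def search(query):
    # The good old search engine: unchanged by any of this.
    return ["https://a.example/cheese", "https://b.example/dairy"]

def call_llm(prompt):
    # Stand-in for a model call; may well be wrong on non-trivial answers.
    return "Summary of: " + prompt

def results_page(query):
    results = search(query)    # same results as always
    prompt = f"Summarize these results for '{query}':\n" + "\n".join(results)
    summary = call_llm(prompt) # the LLM just reads the results for you
    return {"results": results, "ai_summary": summary}

page = results_page("is glue a pizza topping")
print(page["results"])  # untouched search output
```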
-
-
Well, yeah, there are multiple things feeding into the results page they generate for you. Not just two. There's the search results, there's an algorithmic widget that shows different things (so a calculator if you input some math, a translation box if you input a translation request, a summary of Wikipedia or IMDB if you search for a movie or a performer, that type of thing). And there is a pop-up window with an LLM-generated summary of the search results now.
Those are all different pieces. Your search results for "3 divided by 7" aren't different because they also pop up a calculator for you at the top of the page.
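In made-up code, the assembly might look like this. The names are invented, and the calculator is just `eval` behind a try/except, but the point stands: each piece is attached separately and none of them touches the results list:

```python
# Illustrative sketch: plain results, an algorithmic widget, and an
# optional LLM summary are separate pieces of the same page.

def pick_widget(query):
    # One of several special-case widgets; a calculator for math queries.
    try:
        return {"type": "calculator", "value": eval(query, {"__builtins__": {}})}
    except Exception:
        return None  # not a math query; no widget

def build_page(query, results, summary=None):
    page = {"results": results}    # the search results themselves
    widget = pick_widget(query)
    if widget:
        page["widget"] = widget    # bolted on, doesn't change results
    if summary:
        page["ai_summary"] = summary  # likewise a separate piece
    return page

page = build_page("3 / 7", ["https://example.com/fractions"])
print(page["widget"]["value"])  # 0.42857142857142855
```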
-
Yeah, for some reason I was thinking you were trying to say that bolting on widgets made it no longer a search engine.
-
Trust butt, verify
-
Verified
-
Fire burns and smoke asphyxiates, highlighting why relying on fire is a bad idea.
-
LLMs are good for some searches, or for clarifying things the original website doesn't spell out. E.g. the "BY" attribute in Creative Commons being short for "by" (by John Doe) and not "AT" (attributed to John Doe).
-
Stop calling GPT AI
-
That's the inaccurate name everyone's settled on. Kinda like how "sentient" is widely used to mean "sapient" despite being two different things.
-
That is an extremely apt parallel!
(I'm stealing it)
-
I made a smartass comment earlier comparing AI to fire, but it's really my favorite metaphor for it - and it extends to this issue. Depending on how you define it, fire seems to meet the requirements for being alive. It tends to come up in the same conversations that question whether a virus is alive. I think it's fair to think of LLMs (particularly the current implementations) as intelligent - just in the same way we think of fire or a virus as alive. Having many of the characteristics of it, but being a step removed.
-
Altavista was the shit when it came out. My classmates and friends were surprised at how quickly I was getting answers or general information. Altavista, that's it. If you were using Ask Jeeves, you were going to have a hard time.
I can't remember how I found out about it, but it's what I used until Google came out.
-
This article is about Gemini, not GPT. The generic term is LLM: Large Language Model.