Google's AI made up a fake cheese fact that wound up in an ad for Google's AI, perfectly highlighting why relying on AI is a bad idea
-
LLMs are good for some searches, or for clarifying things the original website doesn't spell out. For example, the "BY" in Creative Commons licenses literally meaning "by" (by John Doe), not "AT" (attributed to John Doe).
-
Stop calling GPT AI
-
That's the inaccurate name everyone's settled on. Kinda like how "sentient" is widely used to mean "sapient" despite being two different things.
-
That is an extremely apt parallel!
(I'm stealing it)
-
I made a smartass comment earlier comparing AI to fire, but it's really my favorite metaphor for it - and it extends to this issue. Depending on how you define it, fire seems to meet the requirements for being alive. It tends to come up in the same conversations that question whether a virus is alive. I think it's fair to think of LLMs (particularly the current implementations) as intelligent - just in the same way we think of fire or a virus as alive. Having many of the characteristics of it, but being a step removed.
-
AltaVista was the shit when it came out. My classmates and friends were surprised at how quick I was getting answers or general information. AltaVista, that's it. If you're using Ask Jeeves you're going to have a hard time.
I can't remember how I found out about it, but it's what I used until Google came out.
-
This article is about Gemini, not GPT. The generic term is LLM: Large Language Model.
-
How is it not AI? Just because it's not AGI doesn't mean it's not AI.
-
I totally get all the concerns related to AI. However, the bandwagon of "look, it made a mistake, it's useless!" is a bit silly.
First of all, AI is constantly improving. Remember everyone laughing at AI's mangled fingers? Well, that was fixed some time ago. Now pictures of people are pretty much indistinguishable from real ones.
Second, people also make critical mistakes, plenty of them. The question is not whether AI can be absolutely accurate. The question is whether AI can make, on average, fewer mistakes than a human.
I hate the idea of AI replacing everything and everyone. However, pretending that AI will not eventually be faster, better, cheaper and more accurate than most humans is wishful thinking. I honestly think that our only hope is legislation, not the desperate wish that AI will always need human supervision and input to be correct.
-
It’s an obsolete usage of “beg”
It's a misuse of the cliché "begs the question" (which goes back to the medieval Latin petitio principii), used to call out a form of fallacious reasoning where the desired answer is smuggled into the assumptions. And yeah, that use of "beg" is obsolete, but even worse, the whole phrase is now misused to mean "prompts the question."
-
You put a few GPTs in a trenchcoat and they're obviously AI. I can't speak for OpenAI's offerings, since I won't use it as a cloud service, but the local DeepSeek I've tried is certainly AI. People are moving the goalposts constantly, with what seems to me a determination to avoid seeing the future that's already here. Download deepseek-coder-v2 16B if you have 16 GB of RAM and 10 GB of storage space and see for yourselves; the requirements are ridiculously low for what it can do. It uses 50% of four CPU cores for about 15 seconds to solve a problem with detailed reasoning steps.
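If you want to try it, here's roughly what that looks like; a minimal sketch assuming the ollama runtime is installed and serving locally, and its Python client (`pip install ollama`) is available. The model tag below is an assumption; substitute whatever `ollama list` shows on your machine.

```python
# Minimal sketch of querying a locally hosted model via the ollama
# Python client. Assumes the ollama daemon is running and the model
# has been pulled, e.g. `ollama pull deepseek-coder-v2:16b`.
import ollama

response = ollama.chat(
    model="deepseek-coder-v2:16b",  # assumed tag; check your local library
    messages=[
        {
            "role": "user",
            "content": "Write a function that reverses a linked list, "
                       "and explain your reasoning step by step.",
        },
    ],
)

# The client returns the assistant's reply under message -> content.
print(response["message"]["content"])
```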
-
there's also the problem of techbros and companies everywhere thinking that AI is omniscient and can replace every other profession. who needs a human journalist when you can train an AI on their work (because they work for you and their work is your property ofc) and then just fire them all because you have a perfect AI that you can just set to run forever without checking its work and make infinite money
-
And then the articles will only be clicked on and commented on by bots after a while. Dead internet here we come!