Is It Just Me?
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars, generally cars suck yet we use them as transport.
These are just tools; they're as good or as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
Idk, my boss asks some Perplexity AI any time you ask him any question, instead of either
A) Thinking
Or
B) Researching (he thinks AI is researching, despite it being proven that Perplexity has lied to him before.)
In essence, by making it so he doesn't have to think about things or do any research himself, it is making him dumber. Not in the sense of losing actual brain cells (maybe. Remains to be seen.) but in the sense of "whether or not he's physically dumber, his output is, so functionally..."
-
I had to download the Facebook app to delete my account. Unfortunately I think the Luddites are going to be sent to the camps in a few years.
They can try, but Papa Kaczynski lives forever in our hearts.
-
Why not write this with pen and paper?
It trains your brain even more than typing, it can't be used to train any AI, it uses no electricity compared to the massive amounts a computer uses, and I don't have to read your dumb takes.
Seriously, I know corporate bullshitters are using AI to do dumb things. But it is a fascinating technology that can do a lot of neat things if applied correctly.
Stop acting like AI shit in your shoes and fucked your grandma. It isn't going to just burst and go away.
and I don't have to read your dumb takes.
Just know that I literally just pulled out a pen, a sheet of paper, and my scanner, and was about to write "yes you do, get rekt" and scan it and post it as a pic in this comment.
But you were saved by me realizing I don't want to upload my handwriting to the internet. Still read this comment tho, get rekt.
-
We have a lot of suboptimal aspects of our society like animal farming, war, religion etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are still years away.
Some people are really overreacting and everyone's just enabling them.
-
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured ok, why not. I should mention I don't know much about the subject. The answer was impressive: it gave the result, a brief explanation of how it arrived at it, and the equation it used.
Since I needed it for later, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things. But fixing them still didn't give me the correct result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation.
So as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
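For what it's worth, the standard compound interest formula is simple enough to sandbox yourself. Here's a minimal Python sketch (the dollar amounts and rates are made up for illustration, not from the story above):

```python
# Standard compound interest: A = P * (1 + r/n)**(n*t)
# P = principal, r = annual rate, n = compounding periods per year, t = years
def compound_interest(principal, rate, periods_per_year, years):
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# Made-up example: $1,000 at 5% APR, compounded monthly, for 10 years
print(round(compound_interest(1000, 0.05, 12, 10), 2))
```

The same formula in a spreadsheet (`=1000*(1+0.05/12)^(12*10)`) should agree to the cent; when an AI's number doesn't match, trust the arithmetic.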
You need to verify all resources though. I have a lot of points on Stack Exchange, and after contributing for almost a decade now I can tell you for a fact that LLMs' hallucination issue is not much worse than people's hallucination issue. Information exchange will never be perfect.
You get this incredible speed of an answer, which means you have a lot of remaining budget to verify it. It's a skill issue.
-
You can't dispel irrational thoughts with rational arguments. People hate LLMs because they feel left behind, which is an absolutely valid concern but expressed poorly.
-
Let's go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they'll turn to anything that makes life easier. But it shouldn't be this way and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
I read it as "eating their kids". I am an overworked slave.
-
We shouldn't accuse people of moral failings. That's inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot whatever you give them in the prompt.
People have died from AI usage including unlicensed therapy. This would be like the factory farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
-
The highly specialized "ai" that was used to make the COVID vaccine is probably significantly stupider than chatgpt two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
It doesn't really make sense to call ai "stupid". It's just a computer algorithm.
The results can be good or bad depending on what you want to achieve.
ChatGPT can give a bad result if you try to play chess, because GPT-based algorithms are not very good at that. For that problem a tree-search-based algorithm is better, e.g. minimax.
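To illustrate the distinction, here's a minimal minimax sketch over a toy game tree (not chess; the nested-list tree and its scores are just an illustrative setup):

```python
# Minimax over a toy game tree: leaves are scores (ints),
# internal nodes are lists of child subtrees.
def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: return its score directly
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two candidate moves, each with two opponent replies.
# The maximizer assumes the opponent minimizes, so it picks
# the move whose worst-case reply is best: min(3,5)=3 beats min(2,9)=2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))
```

A real chess engine adds depth limits, position evaluation, and alpha-beta pruning on top of this same recursion, but the core search is exactly this.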
-
Regulate? This is what the lead AI companies are pushing for; they can absorb the bureaucracy but their competitors can't.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, a man notorious for being surrounded by yes-men and his own state propaganda - and for the genocide of Ukrainians in pursuit of the conquest of Europe.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
The shit just needs to be forcibly opensourced. If you steal content from the entire world to build a thinking machine - give back to the world.
The problem is unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers were stolen and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't provide compensation for the usage of that intellectual property.
This would also crash the bubble and would slow down any of the most unethical for-profits.
Not really. It would let more people get it on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source would burst the bubble it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future so that clearly didn't happen.
Forced opensourcing would totally destroy the profits, cause you spend money on research and then you opensource, so anyone can just grab your model and don't pay you a cent. Where would the profits come from?
IP of writers
I mean yes, and? AI is still shitty at creative writing. Unlike with images, it's not like people oneshot a decent book.
give it to the rich
We should push to make high-vram devices accessible. This is literally about the means of production - we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it because everyone else is supposedly too stupid to control it.
OpenAI
They were the most notorious proponents of the regulations. Lots of talks with openai devs where they just doomsay about the dangers of AGI, and how it must be top secret controlled by govs.
-
Forced opensourcing would totally destroy the profits, cause you spend money on research and then you opensource, so anyone can just grab your model and don’t pay you a cent.
DeepSeek was released
The profits were not destroyed.
Where would the profits come from?
At this point the speculation is primarily on what comes next.
People are betting on what they think LLMs will be able to do in the future, not what they do now.
I mean yes, and?
It's theft. They stole the work of writers and all sorts of content creators. That's the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly licensed it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we're "yes, and"-ing the intellectual property of all humanity is laughable in a discussion supposedly about ethics.
We should push to make high-vram devices accessible.
That's a whole other topic. But what we should fight for now is worker-owned corporations. While that is an excellent goal, it isn't helping to undo the theft on its own. It's only allowing more people to profit off that theft. We should also compensate the people who were stolen from, if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.
Lots of talks with openai devs where they just doomsay about the dangers of AGI, and how it must be top secret controlled by govs.
OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn
Looks like the devs aren't in control of the C-Suite. Whoops, all avoidable capitalist driven apocalypses.
-
it's theft
So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the repayments would be so laughably small nobody would even apply.
Deepseek
DeepSeek didn't opensource any of the proprietary AIs corporations run. I'm talking about forcing OpenAI to opensource all of their AI, or close the company if they don't comply.
betting on the future
Ok, new AI model drops, it's opensource, I download it and run on my rack. Where profits?
-
So what?
So appealing to ethics was bullshit, got it. You just wanted the automated theft tool.
Deepseek
It kept some things hidden but it was the most open source LLM we got.
Ok, new AI model drops, it’s opensource, I download it and run on my rack. Where profits?
The next new AI model that can do the next new thing. The entire economy is based on speculative investments. If you can't improve on the AI model running on your machine, you're not getting any investor money.
-
"yet this is what breaks this person's brain?".
"some people are really overreacting".
Sure, this little subset of the internet is aware that LLMs aren't going to cut the mustard. But the general population isn't, and that's the problem. Companies are forcing LLMs on staff and customers alike. Someone suggesting that this is being managed appropriately and sustainably is either ill-informed or intentionally misleading people.
-
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
-
I think framing them as "fears" is dishonest.
-
Whatever you have to tell yourself
-
My pet peeve: "here's what ChatGPT said..."
No.
Stop.
If I'd wanted to know what the Large Lying Machine said, I would've asked it.
"Here's me telling everyone that I have no critical thinking ability whatsoever."
Is more like it
-
LLMs' hallucination issue is not much worse than people's hallucination issue.
Is this supposed to be comforting?
-
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I'm reaching out to ask people's opinions, I want, you know, their opinions. I don't even care if I hear nothing back from them for ages, I just want their input.
"I just fed your private, unpublished intellectual property into a black box owned by billionaires. You're welcome."