Is It Just Me?
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial to its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars, generally cars suck yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
You can't dispel irrational thoughts through rational arguments. People hate LLMs because they feel left behind, which is an absolutely valid concern but expressed poorly.
-
Let's go a step further and look at why people are in burnout, overloaded, and working 3 jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they'll turn to anything that makes life easier. But it shouldn't be this way and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
I read it as "eating their kids". I am an overworked slave.
-
We shouldn't accuse people of moral failings. That's inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot back whatever you give them in the prompt.
People have died from AI usage, including unlicensed therapy. This would be like the factory-farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
-
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
It doesn't really make sense to call ai "stupid". It's just a computer algorithm.
The results can be good or bad depending on what you want to achieve.
ChatGPT can give a bad result if you try to play chess with it, because GPT-based models are not very good at that. For that problem a tree-search-based algorithm is better, e.g. minimax.
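For anyone curious, here's a minimal sketch of the minimax idea on a toy game tree. This is purely illustrative (the tree, scores, and function names are made up for the example); real chess engines layer alpha-beta pruning, depth limits, and a position-evaluation function on top of this.

```python
# Minimax on a toy game tree. Leaves hold scores from the maximizing
# player's point of view; internal nodes alternate between max and min turns.
def minimax(node, maximizing):
    if isinstance(node, int):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the opponent (min) picks the worst leaf in each branch,
# so the best the first player can guarantee is 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

The point is that this kind of exhaustive lookahead gives guarantees an LLM's next-token prediction simply doesn't attempt.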
-
Regulate? That's exactly what the leading AI companies are pushing for: they could handle the bureaucracy, but their competitors couldn't.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
The shit just needs to be forced open source. If you steal content from the entire world to build a thinking machine, give back to the world.
The problem is, unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't provide compensation for the usage of that intellectual property.
This would also crash the bubble and would slow down any of the most unethical for-profits.
Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source would burst the bubble, it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future, so that clearly didn't happen.
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent. Where would the profits come from?
IP of writers
I mean, yes, and? AI is still shitty at creative writing. Unlike with images, it's not like people one-shot a decent book.
give it to the rich
We should push to make high-VRAM devices accessible. This is literally about the means of production; we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it because others are supposedly too stupid to control it.
OpenAI
They were the most notorious proponents of the regulations. There are lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be top secret, controlled by govs.
-
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent.
DeepSeek was released
The profits were not destroyed.
Where would the profits come from?
At this point the speculation is primarily on what comes next.
People are betting on what they think LLMs will be able to do in the future, not what they do now.
I mean yes, and?
It's theft. They stole the work of writers and all sorts of content creators. That's the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly leased it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we're yes-anding the intellectual property of all humanity is laughable in a discussion supposedly about ethics.
We should push to make high-VRAM devices accessible.
That's a whole other topic. But what we should fight for now is worker-owned corporations. While that is an excellent goal, on its own it doesn't undo the theft that was done. It's only allowing more people to profit off that theft. We should also compensate the people who were stolen from if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.
Lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be top secret, controlled by govs.
OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn
Looks like the devs aren't in control of the C-suite. Whoops, all avoidable capitalist-driven apocalypses.
-
it's theft
So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the repayments would be so laughable nobody would even apply.
Deepseek
DeepSeek didn't open-source any of the proprietary AIs corporations have. I'm talking about a forcing-OpenAI-to-open-source-all-of-their-AI type of event, or closing the company if they don't comply.
betting on the future
OK, a new AI model drops, it's open source, I download it and run it on my rack. Where are the profits?
-
So what?
So appealing to ethics was bullshit, got it. You just wanted the automated theft tool.
Deepseek
It kept some things hidden, but it was the most open-source LLM we got.
OK, a new AI model drops, it's open source, I download it and run it on my rack. Where are the profits?
The next new AI model that can do the next new thing. The entire economy is based on speculative investments. If you can't improve on the AI model on your machine, you're not getting any investor money.
-
We have a lot of suboptimal aspects of our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
"yet this is what breaks this person's brain?".
"some people are really overreacting".
Sure, this little subset of the internet is aware that LLMs aren't going to cut the mustard. But the general population isn't, and that's the problem. Companies are forcing LLMs on staff and customers alike. Someone suggesting that this is being managed appropriately and sustainably is either ill-informed or intentionally misleading people.
-
"yet this is what breaks this person's brain?".
"some people are really overreacting".
Sure, this little subset of the internet is aware that LLMs aren't going to cut the mustard. But the general population isn't, and that's the problem. Companies are forcing LLMs on staff and customers alike. Someone suggesting that this is being managed appropriately and sustainably is either ill-informed or intentionally misleading people.
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
-
We have a lot of suboptimal aspects of our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
I think framing them as "fears" is dishonest.
-
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
Whatever you have to tell yourself
-
My pet peeve: "here's what ChatGPT said..."
No.
Stop.
If I'd wanted to know what the Large Lying Machine said, I would've asked it.
"Here's me telling everyone that I have no critical thinking ability whatsoever."
Is more like it
-
You need to verify all resources, though. I have a lot of points on Stack Exchange, and after contributing for almost a decade now I can tell you for a fact that LLMs' hallucination problem is not much worse than people's hallucination problem. Information exchange will never be perfect.
You get this incredible speed of an answer, which means you have a lot of remaining budget to verify it. It's a skill issue.
LLMs' hallucination problem is not much worse than people's hallucination problem.
Is this supposed to be comforting?
-
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I'm reaching out to ask people's opinions, I want, you know, their opinions. I don't even care if I hear nothing back from them for ages, I just want their input.
"I just fed your private, unpublished intellectual property into a black box owned by billionaires. You're welcome."
-
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
Facts tend to be unconvincing when you consider fantasies like "LLMs are being powered by green energy" to be reality.
-
We have a lot of suboptimal aspects of our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
Lemmy is a lost cause for nuanced takes on "AI". It's all just rage now.
-
It's important to remember that there's a lot of money being put into AI, and therefore a lot of propaganda about it.
This has happened with a lot of shitty new tech, and AI is one of the biggest examples I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you. First post on a Lemmy server, by the way. Hello!
Welcome in! Hope you're finding Lemmy in a positive way. It's like Reddit, but you have a lot more control over what you can block and where you can make a "home" (aka home instance).
Feel free to reach out if you have any questions about anything
-
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
Trying to compare the intelligence of a specialized, single-purpose AI to an LLM is asinine and shows you don't really know what you're talking about. Just like how it's asinine to equate a technology that pervades every facet of our lives, personal and professional, without our consent or control, to cars and guns.
-
I find it telling that the best rebuttal anyone can come up with to my comment is to say it's a "shit take."
I mean, wow.