Is It Just Me?
-
I try to use it to pitch ideas for writing (no prose, because fuck almighty) to help fill in ideas or aspects I didn't think about. But it just keeps coming up with shit I don't use, so I just use it for validation and encouragement.
I got a pretty good layout for a new season of Magic School Bus where the Friz loses her mind and decides to become the history teacher.
-
The Luddites were right. Maybe we can learn a thing or two from them...
I had to download the Facebook app to delete my account. Unfortunately, I think the Luddites are going to be sent to the camps in a few years.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial to its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains who can't see past their own bias.
The term AI, when used by laymen, is a blanket term for the generative AI and LLMs that big tech is shoving down all our throats right now, not the highly specialized AIs used in medicine. So bringing up the COVID vaccine is largely a non sequitur.
The rest of your comment is so full of false equivalencies that I'm not even gonna touch it.
-
Did you notice the alt text?
Yeah, and I noticed it didn't describe the image at all - unless one had already seen the image and knew what it was. So for visually impaired users (i.e. one of the main groups who would benefit from alt text) it is insufficient at best.
Griping over AI, however, isn't adding anything that isn't posted frequently around here.
Specific to the OP, the issue is that those of us who know gen-AI is an enormous piece of shit, with only downsides for the things we care about like culture and learning, can feel like we're going a little crazy in a culture that only seems able to share love for it in public places like work. Even public criticism of it has been limited to economic and ecological harms. I haven't seen that particular angle very much before, and as someone else posted here, I felt recognized by it.
Yeah, and I noticed it didn't describe the image at all
How would you state it over the phone?
Alt text is a succinct alternative that conveys (accurate/equivalent) meaning in context, much like reading a comment with an image to someone over the phone.
If you had said "Simpsons meme of an old man yelling at a cloud", that would also suffice.
It doesn't need to go into elaborate detail.
In those discussions, people often talk about having had enough, losing their minds, and it making people dumber, too.
I get that it helps to feel recognized, so would it feel better to broaden the reach of that message for more recognition?
-
Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.
FWIW, she asked it to make a complete infographic-style poster with images and stuff, so GPT created an image with text, not a document. Still asinine.
-
Yeah, but LLMs don't train on data automatically; you need a separate, dedicated process for that, and it won't happen just from using them. In that sense, companies can still use your data to train models in the background even if you aren't directly using an LLM, or they can skip training even when you are using one. I guess in the latter case there's a bigger incentive for them to train than otherwise, but privacy-wise it seems basically the same thing to me.
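To make the distinction concrete, here's a minimal toy sketch in PyTorch. The tiny linear "model" and the random tensors are made-up stand-ins for illustration, not anyone's actual pipeline:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM; purely illustrative.
model = nn.Linear(8, 8)
x = torch.randn(4, 8)

# Inference: a forward pass only. Nothing about the input
# is written back into the weights.
with torch.no_grad():
    y = model(x)

# Training is a separate, deliberate process: someone has to
# collect data, compute a loss, and explicitly update the weights.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
target = torch.randn(4, 8)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
```

The point is just that the second half never runs unless someone deliberately builds and operates it; merely using a model only exercises the first.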
If they're exposing their LLM to the public, there's a higher chance of it leaking training data to the public. You don't know what they trained it with, but there's a chance it's customer data. Sure, they may not train it with anything, but why assume they don't? If they have an internal LLM, that's of lesser concern, because that LLM would probably only show employees data they already have access to.
-
The term AI, when used by laymen, is a blanket term for the generative AI and LLMs that big tech is shoving down all our throats right now, not the highly specialized AIs used in medicine. So bringing up the COVID vaccine is largely a non sequitur.
The rest of your comment is so full of false equivalencies that I'm not even gonna touch it.
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial to its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains who can't see past their own bias.
Idk, my boss keeps asking some Perplexity AI any time you ask him any question, instead of either
A) Thinking
Or
B) Researching (he thinks asking AI is researching, despite it having been proven that Perplexity has lied to him before.)
In essence, by making it so he doesn't have to think about things or do any research himself, it is making him dumber. Not in the sense of losing actual brain cells (maybe; remains to be seen), but in the sense that whether or not he's physically dumber, his output is, so functionally...
-
I had to download the Facebook app to delete my account. Unfortunately, I think the Luddites are going to be sent to the camps in a few years.
They can try, but Papa Kaczynski lives forever in our hearts.
-
Why not write this with pen and paper?
It trains your brain even more than typing, it's impossible for it to be used to train any AI, it uses no electricity compared to the massive amounts a computer uses, and I don't have to read your dumb takes.
Seriously, I know corporations are using AI to do dumb bullshit. But it is a fascinating technology that can do a lot of neat things if applied correctly.
Stop acting like AI shit in your shoes and fucked your grandma. It isn't going to burst and go away.
and I don't have to read your dumb takes.
Just know that I literally just pulled out a pen, a sheet of paper, and my scanner, and was about to write "yes you do, get rekt", scan it, and post it as a pic in this comment.
But you were saved by me realizing I don't want to upload my handwriting to the internet. Still read this comment tho, get rekt.
-
We have a lot of suboptimal aspects in our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
-
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured, why not? I should mention I don't know much about the subject. The answer was impressive: it gave the result, a brief explanation of how it came to that result, and the equation it used.

Since I'd need it again later, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things, but fixing them still didn't give me the "correct" result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation.

So as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
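For what it's worth, that kind of spreadsheet cross-check is trivial to script yourself. A minimal sketch using the standard compound interest formula, with made-up example numbers:

```python
# Standard compound interest: A = P * (1 + r/n) ** (n * t)
P = 10_000   # principal (example value)
r = 0.05     # annual interest rate (example value)
n = 12       # compounding periods per year
t = 10       # years

A = P * (1 + r / n) ** (n * t)
print(f"Balance after {t} years: {A:.2f}")  # roughly 16470 with these inputs
```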
You need to verify all sources, though. I have a lot of points on Stack Exchange, and after contributing for almost a decade I can tell you for a fact that LLMs' hallucination issue is not much worse than people's hallucination issue. Information exchange will never be perfect.
You get an answer at incredible speed, which means you have a lot of budget remaining to verify it. It's a skill issue.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial to its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains who can't see past their own bias.
You can't dispel irrational thoughts through rational arguments. People hate LLMs because they feel left behind, which is an absolutely valid concern but expressed poorly.
-
Let's go a step further and look at why people are in burnout, are overloaded, and are working three jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive, and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah, they'll turn to anything that makes life easier. But it shouldn't be this way, and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
I read it as "eating their kids". I am an overworked slave.
-
We shouldn't accuse people of moral failings. That's inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot whatever you give them in the prompt.
People have died from AI usage, including unlicensed therapy. This would be like the factory-farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
-
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
It doesn't really make sense to call AI "stupid". It's just a computer algorithm.
The results can be good or bad depending on what you want to achieve.
ChatGPT can give a bad result if you try to play chess with it, because GPT-based algorithms are not very good at that. For that problem, a tree-search-based algorithm is better, e.g. minimax.
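To show what the tree-search idea looks like, here's a bare-bones sketch of plain minimax over a toy game tree. The tree and its scores are invented for illustration; a real chess engine would add move generation, pruning, and position evaluation on top:

```python
# Bare-bones minimax over an explicit game tree.
# Leaves are numeric scores from the maximizing player's point of view.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: a final score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply tree: our move (max), then the opponent's reply (min).
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3: the best guaranteed outcome
```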
-
Regulate? This is what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn't.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
The shit just needs to be forced to open-source. If you steal content from the entire world to build a thinking machine, give back to the world.
The problem is that unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't provide compensation for the use of that intellectual property.
This would also crash the bubble and slow down the most unethical for-profits.
Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily about what comes next. If open source would burst the bubble, it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future, so that clearly didn't happen.
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent. Where would the profits come from?
IP of writers
I mean, yes, and? AI is still shitty at creative writing. Unlike with images, it's not like people one-shot a decent book.
give it to the rich
We should push to make high-VRAM devices accessible. This is literally about the means of production; we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it, because supposedly everyone else is too stupid to control it.
OpenAI
They were the most notorious proponents of the regulations. There are lots of talks where OpenAI devs just doomsay about the dangers of AGI and how it must be kept top secret and controlled by governments.
-
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent. Where would the profits come from?
IP of writers
I mean, yes, and? AI is still shitty at creative writing. Unlike with images, it's not like people one-shot a decent book.
give it to the rich
We should push to make high-VRAM devices accessible. This is literally about the means of production; we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it, because supposedly everyone else is too stupid to control it.
OpenAI
They were the most notorious proponents of the regulations. There are lots of talks where OpenAI devs just doomsay about the dangers of AGI and how it must be kept top secret and controlled by governments.
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent.
DeepSeek was released
The profits were not destroyed.
Where would the profits come from?
At this point the speculation is primarily about what comes next.
People are betting on what they think LLMs will be able to do in the future, not what they do now.
I mean, yes, and?
It's theft. They stole the work of writers and all sorts of content creators. That's the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly licensed it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we're "yes, and-ing" the intellectual property of all humanity is laughable in a discussion supposedly about ethics.
We should push to make high-VRAM devices accessible.
That's a whole other topic. What we should fight for now is worker-owned corporations. While accessible hardware is an excellent goal, on its own it doesn't help undo the theft that was done; it only allows more people to profit off that theft. We should also compensate the people who were stolen from, if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.
There are lots of talks where OpenAI devs just doomsay about the dangers of AGI and how it must be kept top secret and controlled by governments.
OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn
Looks like the devs aren't in control of the C-suite. Whoops, all avoidable capitalist-driven apocalypses.
-
Forced open-sourcing would totally destroy the profits, because you spend money on research and then you open-source, so anyone can just grab your model and not pay you a cent.
DeepSeek was released
The profits were not destroyed.
Where would the profits come from?
At this point the speculation is primarily about what comes next.
People are betting on what they think LLMs will be able to do in the future, not what they do now.
I mean, yes, and?
It's theft. They stole the work of writers and all sorts of content creators. That's the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly licensed it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we're "yes, and-ing" the intellectual property of all humanity is laughable in a discussion supposedly about ethics.
We should push to make high-VRAM devices accessible.
That's a whole other topic. What we should fight for now is worker-owned corporations. While accessible hardware is an excellent goal, on its own it doesn't help undo the theft that was done; it only allows more people to profit off that theft. We should also compensate the people who were stolen from, if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.
There are lots of talks where OpenAI devs just doomsay about the dangers of AGI and how it must be kept top secret and controlled by governments.
OpenAI's chief global affairs officer, Chris Lehane, wrote on LinkedIn
Looks like the devs aren't in control of the C-suite. Whoops, all avoidable capitalist-driven apocalypses.
it's theft
So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the payouts would be so laughable nobody would even apply.
DeepSeek
DeepSeek didn't open-source any of the proprietary AIs that corporations run. I'm talking about forcing OpenAI to open-source all of their AI, that type of event, or close the company if they don't comply.
betting on the future
Ok, a new AI model drops, it's open source, I download it and run it on my rack. Where are the profits?
-
it's theft
So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the payouts would be so laughable nobody would even apply.
DeepSeek
DeepSeek didn't open-source any of the proprietary AIs that corporations run. I'm talking about forcing OpenAI to open-source all of their AI, that type of event, or close the company if they don't comply.
betting on the future
Ok, a new AI model drops, it's open source, I download it and run it on my rack. Where are the profits?
So what?
So appealing to ethics was bullshit, got it. You just wanted the automated theft tool.
DeepSeek
It kept some things hidden, but it was the most open-source LLM we got.
Ok, a new AI model drops, it's open source, I download it and run it on my rack. Where are the profits?
The next new AI model that can do the next new thing. The entire economy is based on speculative investments. If you can't improve on the AI model on your machine, you're not getting any investor money.