Is It Just Me?
-
Yeah it definitely has its uses. OP wasn't saying it's never useful, I think you may have missed the forest for the trees.
uhm no I'm pretty sure op wouldn't approve judging by the:
"but you can-" I'm gonna lose it
-
This post did not contain any content.
Reminder that these people existed when the radio was invented.
You can't put everything back in Pandora's box, but amid all the negativity you latch on to there's a sliver of positivity, and we have to protect that.
Anyone who is anti-AI, I get it. You need to understand there's no going back. We can help control the path forward.
-
I do agree with them that stealing may not be the right word; isn't plagiarism more accurate? But plagiarism is generally considered theft, so it probably doesn't matter. I just found it really interesting that I personally haven't given much thought to the semantics of theft when no physical object is involved, even though it's been discussed for like centuries atp
Plagiarism isn't correct either. For something to be plagiarism, it needs to be both copied exactly and intentionally misattributed (i.e. you claim you wrote something that you didn't).
The output of large language models is similar in some ways to plagiarism: when someone claims they wrote something that was actually just the output of an LLM. However, that really isn't the same thing, because an LLM isn't a legal entity that's capable of owning anything.
LLMs are also just a tool: an advanced tool that can generate all sorts of text and software, but one that still requires a human to tell it what to do.
If some human asks ChatGPT to write something in the style of Stephen King, what even is that? It's not against the law (you can't copyright a writing style). It's basically "not a thing." Is that even a bad thing? I honestly don't think so, because if I put myself in those same shoes ("write a comment in the style of Riskable"), all I can do is
. It's of no consequence.
I'd also argue that it's of no consequence to authors either. What impact does it have on them? None. It doesn't affect their book/whatever sales. It doesn't hurt the market for their works—if anything, it makes the market for their works greater because their works won't be total shit like the output of some LLM (LOL).
-
When I was a kid and first realized I was maybe a genius, it was terrifying. That there weren't always gonna just be people smarter than me who could fix it.
Seeing them get dumber is like some horror movie shit.
I don't fancy myself a genius but the way other people navigate things seems to create a strangely compelling case on its own
-
My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers and is riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
good enough for people to read
wow, what a standard, super professional look for your customers!
-
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
- OpenAI Text Crawler
You don't think the disabled use technology?
Or that search engine optimization existed before LLMs?
Or that text sticks around when images break?
Lack of accessibility wouldn't stop LLMs: it could probably process images into text the hard way & waste more energy in the process.
That'd be great, right?
-
A hyphen isn't a quotation dash.
-
Are we playing the AI game? Let’s pretend we’re AI.
Here's some fun punctuation:‒−–—―…:
Beep bip boop.
-
-
This post did not contain any content.
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
-
Reminder that these people existed when the radio was invented.
You can't put everything back in Pandora's box, but amid all the negativity you latch on to there's a sliver of positivity, and we have to protect that.
Anyone who is anti-AI, I get it. You need to understand there's no going back. We can help control the path forward.
I think the big difference is that the first public radio broadcast was in 1906, and regulation started to follow in 1910. Even today you need a license to use the technology.
Machine learning (or AI) has been around for over half a century. Now that it's more abundant in everyday life, they're trying to pass laws that prohibit regulation of AI.
An argument could probably be made with automobiles, but again those early adopters were heavily regulated.
“The Locomotive Act of 1865,” better known as the “Red Flag Act.” While not technically a ban, it stipulated that, when being operated on public roads, “at least three persons are to be employed to drive or conduct a locomotive” and that one of those people must walk ahead of the vehicle waving a red flag to warn others on the road and to signal the driver when to stop. It also set the top speed on highways to 4mph, and through towns to 2mph. (link)
We can learn from the past, but I don't think we can directly apply those lessons or simply hand-wave away the fears as if this were just another previous technology.
-
I think the big difference is that the first public radio broadcast was in 1906, and regulation started to follow in 1910. Even today you need a license to use the technology.
Machine learning (or AI) has been around for over half a century. Now that it's more abundant in everyday life, they're trying to pass laws that prohibit regulation of AI.
An argument could probably be made with automobiles, but again those early adopters were heavily regulated.
“The Locomotive Act of 1865,” better known as the “Red Flag Act.” While not technically a ban, it stipulated that, when being operated on public roads, “at least three persons are to be employed to drive or conduct a locomotive” and that one of those people must walk ahead of the vehicle waving a red flag to warn others on the road and to signal the driver when to stop. It also set the top speed on highways to 4mph, and through towns to 2mph. (link)
We can learn from the past, but I don't think we can directly apply those lessons or simply hand-wave away the fears as if this were just another previous technology.
See and now it's a productive discussion on the path forward instead of whinging about a new technology.
I can only hope regulation comes; it's long overdue for a lot of the technology we use. It will be interesting to see what the impetus will be to move it forward.
To be clear I'm not disagreeing with anything you've said.
-
What if the point of AI is to have it create a personal model for each of us, using the vast amounts of our data they have access to, in order to manipulate us into buying and doing whatever the people who own it want, but they can't just come out and say that?
I'm sure that's at least part of the idea, but I've yet to see any evidence that it won't also be dog shit at that. It doesn't have the context window or foresight to conceive of a decent plot twist in a piece of fiction despite having access to every piece of fiction ever written. I'm not buying that it would be able to build a psychological model and contextualize 40-plus years of lived experience in a way that could get me to buy a $20 Dubai chocolate bar or drive a Chevy.
-
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
being anti-AI is making me feel like I'm going insane. "You asked for thoughts about your character’s backstory and I put it into ChatGPT for ideas." Studies have proven it’s making people dumber. "I asked AI to generate this meal plan." It’s causing water shortages where its data centers are built. "I’ll generate some pictures for the DnD campaign." It’s spreading misinformation. "Meta, generate an image of this guy doing something stupid." It’s trained off stolen images, writing, video, audio. "I was talking with my Snapchat AI." There’s no way to verify what it’s doing with the information it collects. "YouTube is implementing AI-based age verification." My work has an entire graphics media department and has still put AI-generated motivational posters up everywhere. AI playlists. AI facial verification. Google AI. Microsoft AI. Meta AI. Snapchat AI.
Everyone treats it as a novelty. Everyone treats it as a mandatory part of life. Am I the only one who sees it? Am I paranoid? Am I going insane? Jesus fucking Christ.
If I have to hear one more "Well at least—", "But it does—", "But you can—" I’m about to lose it.
I shouldn’t have to jump through hoops to avoid the evil machine. Have you no principles? No goddamn spine? Am I the weird one here?
Still shoddy.
-
Yeah it definitely has its uses. OP wasn't saying it's never useful, I think you may have missed the forest for the trees.
The whole premise is about avoiding it at all costs and that being difficult to do. Where in that ranty wall is a statement about the utility of AI?
-
This post did not contain any content.
Luckily, hating Tesla and AI are now personality traits, right, wrong, or indifferent.
-
Reminder that these people existed when the radio was invented.
You can't put everything back in Pandora's box, but amid all the negativity you latch on to there's a sliver of positivity, and we have to protect that.
Anyone who is anti-AI, I get it. You need to understand there's no going back. We can help control the path forward.
That’s a bit of a false analogy because radio never threatened to take away millions of people’s livelihoods.
A more apt comparison would be the actual Luddites during the Industrial Revolution, who smashed machines because massive numbers of people were being turned off the land and their traditional economic activities were unable to compete with machine-based production.
People don’t just hate AI because it’s new, they hate it because it will condemn millions of people to poverty while making a handful of rich people even more rich.
-
And suddenly we’re talking about locally hosted models I guess? You’re just contorting your argument and cherry picking the single part of the deeply problematic technology you could possibly defend. It’s disingenuous and you’re just poorly covering it up with condescension.
Yeah, and electricity can kill you and start fires, that's why we only use it in controlled equipment. Bleach can kill you too, that's why I only defend it to clean the house. I also only defend knives to chop vegetables, not to murder babies. What a dumb argument: "you only defend technology X for its good uses, and not for killing kittens".
-
And Google would never lie about how much energy a prompt costs, right?
Especially not since they have a vested interest in having people use their AI products, right?
... They're kind of governed by law about what things they're allowed to tell their stockholders.
And before you try to say otherwise, yes, laws that protect the ownership class are still being enforced.
-
Yeah, and electricity can kill you and start fires, that's why we only use it in controlled equipment. Bleach can kill you too, that's why I only defend it to clean the house. I also only defend knives to chop vegetables, not to murder babies. What a dumb argument: "you only defend technology X for its good uses, and not for killing kittens".
The post isn’t about locally hosted models amigo.
-
This post did not contain any content.
My hope is that the AI bubble/trend might have a silver lining overall.
I'm hoping that people start realizing that it is often confidently incorrect. That while it makes some tasks faster, a person will still need to vet the answers.
Here's the stretch. My hope is that by questioning and researching to verify the answers AI is giving them, people start applying this same skepticism to their daily lives to help filter out all the noise and false information that is getting shoved down their throats every minute of every day.
So that the populace in general can become more resistant to the propaganda. AI would effectively be a vaccine to boost our herd immunity to BS.
Like I said. It’s a hope.
-
Why does profit matter? I don't personally give a shit about the longevity, margins or market share of any company invested in the technology, but I am generally in favor of research and development of any technology. In most research it's hard to predict the future applications.
That's not to say development is always smart, or safe, or ethical, just that it has to happen in order to see where this goes. Even if there's an end point it's helpful to know where it is.
Unfortunately, capitalism requires a sacrifice to the economy in order to pursue anything. That's what sucks about this. If we weren't hard-wired to justify existence in capital, there wouldn't be so much occlusive hype around it.
Why does profit matter? . . . but I am generally in favor of research and development of any technology. In most research it's hard to predict the future applications.
Answered your own question there.
If we weren't hard wired to justify existence in capital there wouldn't be so much occlusive hype around it.
Can't argue with that. It is almost entirely a cash grab that is astonishing in its overreach and astounding in its apparent failure.
-
One thing I don't get with people fearing AI is that when something adds AI, suddenly it's a privacy nightmare. Yeah, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. Now, in some cases that means data that previously wasn't (or shouldn't be) sent to a server is now being sent, but I've often seen people complain about privacy in cases where I don't understand why AI is the tipping point: if you don't trust the company to not store your data when using AI, why trust it in the first place?
if you don't trust the company to not store your data when using AI, why trust it in the first place?
Policies, procedures, and common sense - three things AI is most assuredly not known for respecting. (Not that the whole topic of data privacy isn't a huge issue outside of AI)