Is It Just Me?
-
This post did not contain any content.
-
It did help me make a basic script and add it to Task Scheduler so it runs automatically and fixes my broken WiFi card without me having to do it manually. (Or, better put, it helped me avoid asking arrogant people who feel smug when I tell them I haven't opened a command prompt in ten years.)
I feel like I would have been able to do that easily 10 years ago, because search engines worked, and the 'web wasn't full of garbage. I reckon I'd have near zero chance now.
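For the curious, here's a minimal sketch of the kind of fix-it script described above. This is a hypothetical reconstruction, not the poster's actual script: it assumes a Windows machine where bouncing the adapter clears the fault, and that the interface is named "Wi-Fi" (both assumptions; check `netsh interface show interface` for the real name).

```python
# Hypothetical sketch: restart a misbehaving Windows Wi-Fi adapter by
# disabling and re-enabling it with netsh. Task Scheduler can run this
# on logon or on a timer so nobody has to do it by hand.
import subprocess

ADAPTER = "Wi-Fi"  # assumed interface name; yours may differ


def build_commands(adapter: str) -> list[list[str]]:
    """Return the two netsh invocations that bounce the adapter."""
    return [
        ["netsh", "interface", "set", "interface",
         f"name={adapter}", "admin=disable"],
        ["netsh", "interface", "set", "interface",
         f"name={adapter}", "admin=enable"],
    ]


def reset_adapter(adapter: str = ADAPTER) -> None:
    """Run the commands in order; needs an elevated (admin) shell."""
    for cmd in build_commands(adapter):
        subprocess.run(cmd, check=True)
```

Saved as, say, `fix_wifi.py`, it can then be pointed at by a Task Scheduler task set to run with highest privileges (the trigger and task name are up to you).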
-
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes that have decided what the future would look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
-
I'd like to hear more about this because I'm fairly tech savvy and interested in legal nonsense (not American) and haven't heard of it. Obviously, I'll look it up but if you have a particularly good source I'd be grateful.
I have lawyer friends. I've seen snippets of their work lives. It continues to baffle me how much relies on people who don't have the waking hours or physical capabilities to consume and collate that much information somehow understanding it well enough to present a true, comprehensive argument on a deadline.
I do tech support for a few different law firms in my area, and AI companies that offer document ingest and processing are crawling out of every crack.
Some of them are pretty good: they account for the hallucination issues and offer direct links to the quotes they're pulling. It's an area that's still developing, but soon it should be easier for a lawyer to take on more cases, or to charge less per case, because the work gets so much easier.
-
This post did not contain any content.
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
-
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
Even if you accept that LLMs are a necessary, but ultimately disappointing, step on the way to a much more useful technology like AGI, there's still a very good argument to be made that we should stop investing in them now.
I'm not talking about the "AI is going to enslave humanity" theories either. We already have human overlords who would have no issue doing exactly that. Giving them the technology to make most of us redundant, at the precise moment when human populations are higher than they've ever been, is a recipe for disaster that could make what's happening in Gaza seem like a relaxing vacation. They will have absolutely no problem condemning billions to untold suffering and death if it means they can make a few more dollars.
We need to figure our shit out as a species before we birth that kind of technology or else we're all going to suffer immensely.
-
Probably right, but to be fair, it’s “been” quantum computing since the ’90s.
AI has been AI since the 50s.
-
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
Making machines think for you, badly, is a lot different from having machines do computation with controlled inputs and outputs. LLMs are a dead end in the hunt for AGI, they actively make us stupider, and they're killing the planet. There's a lot to fucking hate on.
I do think that generative ai can have its uses, but LLMs are the most cursed thing. The fact that the word guesser has emergent properties is interesting, but we definitely shouldn't be using those properties like this.
-
Even if you accept that LLMs are a necessary, but ultimately disappointing, step on the way to a much more useful technology like AGI, there's still a very good argument to be made that we should stop investing in them now.
I'm not talking about the "AI is going to enslave humanity" theories either. We already have human overlords who would have no issue doing exactly that. Giving them the technology to make most of us redundant, at the precise moment when human populations are higher than they've ever been, is a recipe for disaster that could make what's happening in Gaza seem like a relaxing vacation. They will have absolutely no problem condemning billions to untold suffering and death if it means they can make a few more dollars.
We need to figure our shit out as a species before we birth that kind of technology or else we're all going to suffer immensely.
Even if we had an AGI that gave us the steps to fix the world, prevent mass extinction, and get the US to stop all wars, it wouldn't make a difference, because those in charge simply wouldn't listen to it. In fact, generative AI already gives you answers about peace and slowing climate change based on real academic work, and those in charge ignore both the AI they claim to trust and the scholars who spend their whole lives finding these solutions.
-
This post did not contain any content.
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
-
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them, it was just astroturfed to fuck.
-
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
At least not knowingly. It seems some customer service systems feed your messages straight to AI before any human gets involved.
-
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
Sam Altman, or whatever the fuck his name is, asked users to stop saying please and thank you to ChatGPT because it was costing the company millions. "Please" and "thank you" are among the least power-hungry queries ChatGPT gets, and even they cost the company millions. Probably tens of millions of dollars, if the CEO made a public comment about it.
You're right that training is hella power-hungry, but even using gen AI has heavy power costs.
-
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes that have decided what the future would look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Classic Torment Nexus moment over and over again really
-
This post did not contain any content.
Not just you. AI is making people dumber. I am frequently correcting the mistakes of my colleagues who use it.
-
Even if we had an AGI that gave us the steps to fix the world, prevent mass extinction, and get the US to stop all wars, it wouldn't make a difference, because those in charge simply wouldn't listen to it. In fact, generative AI already gives you answers about peace and slowing climate change based on real academic work, and those in charge ignore both the AI they claim to trust and the scholars who spend their whole lives finding these solutions.
Yep, those are some of the things we need to figure out before we hand our collective fate to the people in charge. I think we should start by figuring out how to keep shitty people from getting or staying in charge.
-
Me and the homies all hate AI. The only thing people around me seem to use AI for is essentially just Snapchat filters. Those people couldn’t muster a single fuck about the harms AI has done, though.
The only thing people around me seem to use ai for is essentially code completion, test case development and email summaries. I don't know a single person who uses Snapchat. It's like the world is diverse and tools have uses.
"I hate tunnel boring machines, none of my buddies have a use for a tunnel boring machine, and they are expensive and consume a ton of energy."
-
This post did not contain any content.
My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers, and it's riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
-
I feel like I would have been able to do that easily 10 years ago, because search engines worked, and the 'web wasn't full of garbage. I reckon I'd have near zero chance now.
I actually ended up switching to Kagi for this exact reason. Google is basically an AI overview at the top, usually spouting nonsense, then sponsored posts, and then a bunch of SEO-optimized BS.
Thankfully paying for search circumvents the ads and it hasn’t been AI by default (it has it but it’s off) and the results have been generally closer to 2010s Google.
-
I can't take anyone seriously that says it's "trained on stolen images."
Stolen, you say? Well, I guess we're going to have to force those AI companies to put those images back! Otherwise, nobody will be able to see them!
...because that's what "stolen" means. And no, I'm not being pedantic. It's a really fucking important distinction.
The correct term is "copied," but that doesn't sound quite as severe. Also, if we want to get really specific, the images are presently on the Internet. Right now. Because that's what ImageNet (and similar datasets) are: databases of URLs pointing to images that people are offering up for free to anyone on the Internet who wants them.
Did you ever upload an image anywhere publicly, for anyone to see? Chances are someone could've annotated it and included it in some AI training database. If it's on the Internet, it will be copied and used without your consent or knowledge. That's the lesson we learned back in the 90s and if you think that's not OK then go try to get hired by the MPAA/RIAA and you can try to bring the world back to the time where you had to pay $10 for a ringtone and pay again if you got a new phone (because—to the big media companies—copying is stealing!).
Now that that's clear, let's talk about the ethics of training an AI on such data: there aren't any. It's an N/A situation! Why? Because until the AI models are actually used for any given purpose, they're just data on a computer somewhere.
What about legally? Judges have already ruled in multiple countries that training AI in this way is considered fair use. There's no copyright violation going on... Because copyright only covers distribution of copyrighted works, not what you actually do with them (internally; like training an AI model).
So let's talk about the real problems with AI generators so people can take you seriously:
- Humans using AI models to generate fake nudes of people without their consent.
- Humans using AI models to copy works that are still under copyright.
- Humans using AI models to generate shit-quality stuff for the most minimal effort possible, saying it's good enough, then not hiring an artist to do the same thing.
The first one seems impossible to solve (to me). If someone generates a fake nude and never distributes it... Do we really care? It's like a tree falling in the forest with no one around. If they (or someone else) distribute it though, that's a form of abuse. The act of generating the image was a decision made by a human—not AI. The AI model is just doing what it was told to do.
The second is—again—something a human has to willingly do. If you try hard enough, you can make an AI image model get pretty close to a copyrighted image... But it's not something that is likely to occur by accident. Meaning, the human writing the prompt is the one actively seeking to violate someone's copyright. Then again, it's not really a copyright violation unless they distribute the image.
The third one seems likely to solve itself over time as more and more idiots are exposed for making very poor decisions to just "throw it at the AI" then publish that thing without checking/fixing it. Like Coca Cola's idiotic mistake last Christmas.
Goddamn, I'm stoked I'm not you.