Is It Just Me?
-
This post did not contain any content.
One thing I don't get about people fearing AI is how, when something adds AI, suddenly it's a privacy nightmare. Yeah, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. Now, in some cases that means data that previously wasn't, or shouldn't be, sent to a server is now being sent, but I've seen people complain about privacy so often in cases where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when using AI, why trust it in the first place?
-
This post did not contain any content.
It did help me make a basic script and add it to Task Scheduler so it runs automatically and fixes my broken WiFi card, so I don't have to do it manually. (Or rather, it helped me avoid asking arrogant people who feel smug when I tell them I haven't opened a command prompt in ten years.)
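For anyone wanting to do something similar: the script itself can be tiny. Here's a rough sketch of the kind of thing that ends up in Task Scheduler, written as a Python example; the adapter name "Wi-Fi" is an assumption (check yours with `netsh interface show interface`), and the task has to run elevated for netsh to be allowed to touch the adapter.

```python
import subprocess
import time

# Adapter name as shown by "netsh interface show interface".
# "Wi-Fi" is just a guess; yours may differ.
ADAPTER = "Wi-Fi"

def bounce_adapter(name: str) -> None:
    """Disable and re-enable the adapter, which clears most stuck/broken states."""
    # Needs an elevated context, e.g. a Task Scheduler task set to
    # "Run with highest privileges".
    subprocess.run(
        ["netsh", "interface", "set", "interface", f"name={name}", "admin=disabled"],
        check=True,
    )
    time.sleep(5)  # give the driver a moment before bringing it back up
    subprocess.run(
        ["netsh", "interface", "set", "interface", f"name={name}", "admin=enabled"],
        check=True,
    )

if __name__ == "__main__":
    bounce_adapter(ADAPTER)
```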
-
One thing I don't get about people fearing AI is how, when something adds AI, suddenly it's a privacy nightmare. Yeah, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. Now, in some cases that means data that previously wasn't, or shouldn't be, sent to a server is now being sent, but I've seen people complain about privacy so often in cases where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when using AI, why trust it in the first place?
It's more about them feeding it into an LLM, which then decides to incorporate it into an answer for some random person.
-
It did help me make a basic script and add it to Task Scheduler so it runs automatically and fixes my broken WiFi card, so I don't have to do it manually. (Or rather, it helped me avoid asking arrogant people who feel smug when I tell them I haven't opened a command prompt in ten years.)
I feel like I would have been able to do that easily 10 years ago, because search engines worked, and the 'web wasn't full of garbage. I reckon I'd have near zero chance now.
-
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a Lemmy server, by the way. Hello!
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes who have decided what the future will look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
-
I'd like to hear more about this because I'm fairly tech-savvy and interested in legal nonsense (not American) and haven't heard of it. Obviously I'll look it up, but if you have a particularly good source, I'd be grateful.
I have lawyer friends. I've seen snippets of their work lives. It continues to baffle me how much relies on people who don't have the waking hours or physical capabilities to consume and collate that much information somehow understanding it well enough to present a true, comprehensive argument on a deadline.
I do tech support for a few different law firms in my area, and AI companies that offer document ingest and processing are crawling out of every crack.
Some of them are pretty good: they account for the hallucination issues and offer direct links to the quotes they're pulling. It's an area that's still developing, and soon it should be easier for a lawyer to take on more cases, or to charge less per case, because being a lawyer becomes so much easier.
-
This post did not contain any content.
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
-
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
Even if you accept that LLMs are a necessary, but ultimately disappointing, step on the way to a much more useful technology like AGI, there's still a very good argument to be made that we should stop investing in them now.
I'm not talking about the "AI is going to enslave humanity" theories either. We already have human overlords who have no issue doing exactly that, and giving them the technology to make most of us redundant, at the precise moment when the human population is higher than it's ever been, is a recipe for disaster that could make what's happening in Gaza seem like a relaxing vacation. They will have absolutely no problem condemning billions to untold suffering and death if it means they can make a few more dollars.
We need to figure our shit out as a species before we birth that kind of technology, or else we're all going to suffer immensely.
-
Probably right, but to be fair, it’s “been” quantum computing since the ’90s.
AI has been AI since the ’50s.
-
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
Making machines think for you, badly, is a lot different from having machines do computation with controlled inputs and outputs. LLMs are a dead end in the hunt for AGI, they actively make us stupider, and they're killing the planet. There's a lot to fucking hate on.
I do think that generative AI can have its uses, but LLMs are the most cursed thing. The fact that the word guesser has emergent properties is interesting, but we definitely shouldn't be using those properties like this.
-
Even if you accept that LLMs are a necessary, but ultimately disappointing, step on the way to a much more useful technology like AGI, there's still a very good argument to be made that we should stop investing in them now.
I'm not talking about the "AI is going to enslave humanity" theories either. We already have human overlords who have no issue doing exactly that, and giving them the technology to make most of us redundant, at the precise moment when the human population is higher than it's ever been, is a recipe for disaster that could make what's happening in Gaza seem like a relaxing vacation. They will have absolutely no problem condemning billions to untold suffering and death if it means they can make a few more dollars.
We need to figure our shit out as a species before we birth that kind of technology, or else we're all going to suffer immensely.
Even if we had an AGI that gave us the steps to fix the world, prevent mass extinction, and come up with a solution for the US to stop all wars, it wouldn't make a difference, because those in charge simply wouldn't listen to it. In fact, generative AI already gives you answers about peace and about slowing down climate change, based on real academic work, and those in charge ignore both the AI they claim to trust and the scholars who spend their whole lives finding those solutions.
-
This post did not contain any content.
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
-
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a Lemmy server, by the way. Hello!
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them; it was just astroturfed to fuck.
-
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
At least not knowingly. It seems some customer service stuff feeds what you type straight to AI before any human gets involved.
-
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
Sam Altman, or whatever the fuck his name is, asked users to stop saying please and thank you to ChatGPT because it was costing the company millions. "Please" and "thank you" are among the least power-hungry queries ChatGPT gets, and they're still costing it millions. Probably tens of millions of dollars, if the CEO made a public comment about it.
You're right that training is hella power-hungry, but even using gen AI has heavy power costs.
-
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes who have decided what the future will look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Classic Torment Nexus moment, over and over again, really.
-
This post did not contain any content.
Not just you. AI is making people dumber. I am frequently correcting the mistakes of colleagues who use it.
-
Even if we had an AGI that gave us the steps to fix the world, prevent mass extinction, and come up with a solution for the US to stop all wars, it wouldn't make a difference, because those in charge simply wouldn't listen to it. In fact, generative AI already gives you answers about peace and about slowing down climate change, based on real academic work, and those in charge ignore both the AI they claim to trust and the scholars who spend their whole lives finding those solutions.
Yep, those are some of the things we need to figure out before we hand our collective fate to the people in charge. I think we should start by figuring out how to keep shitty people from getting or staying in charge.
-
Me and the homies all hate AI. The only thing people around me seem to use AI for is essentially just Snapchat filters. Those people couldn’t muster a single fuck about the harms AI has done, though.
The only thing people around me seem to use AI for is essentially code completion, test-case development, and email summaries. I don't know a single person who uses Snapchat. It's like the world is diverse and tools have uses.
"I hate tunnel boring machines, none of my buddies has an use for a tunnel boring machine, and they are expensive and consume a ton of energy"