Is It Just Me?
-
On the contrary: society has repeatedly rejected a lot of ideas that industries have come up with.
HD DVD, 3D TV, cryptocurrency, NFTs, LaserDiscs, 8-track tapes, UMDs. A decade ago everyone was hyping up how VR would be the future of gaming, yet it's still a niche novelty today.
The difference with AI is that I don't think I've ever seen a supply-side push this strong before. I'm not seeing a whole lot of demand for it from individual people. It's "oh, this is a neat little feature I can use," not "this technology is going to change my life" the way the washing machine, the personal motor vehicle, the telephone, or the internet did. I could be wrong, but I think that as long as we can survive the bubble bursting, we will come out on the other side with LLMs being a blip on the radar. And one consequence will be that if anyone makes a real AI, they will need to call it something else for marketing purposes, because "AI" will be ruined.
HD DVDs weren't rejected by the masses; they were a casualty of Sony's vendetta over the loss of Beta and DAT. Both of those were rejected by industry, not consumers (though both were later embraced by industry, and Beta even outlasted VHS). HD DVD would have won out for the same reasons that Sony lost the previous format wars (insistence on licensing fees), except this time Sony bought Columbia and had a whole library of video and a studio to make new movies to release exclusively on their format. Essentially the supply side pushing something until consumers accepted it, though to your point, not quite as bad as AI is right now.
8-tracks and LaserDiscs were simply replaced by better formats (the Compact Cassette and Video CD/DVD, respectively). Each of them was also a replacement for a previous format, like reel-to-reel and CED.
UMDs only died off because flash media got better and because Sony opted for a cheaper scratch-resistant coating instead of a built-in case for later formats (like Blu-ray). UMDs themselves were also a replacement for, or at least inspired by, an earlier format called MiniDisc.
Capitalism's biggest feat has been convincing people that everything is the next big thing and that nothing that came before is similar, when just about everything is rinse and repeat, even LLMs… remember when Watson beat Ken Jennings?
-
Whenever someone bitches about em dashes I assume they haven't read books.
-
Problem is, a lot of the food I have doesn't come in packaging with nutritional values and the like, so I need to use the internet for this, and AI is usually the fastest at explaining it, especially because English is not my first language and the food I'm eating (Balkan) isn't well known in English.
Yeah, I always used a generic app for counting calories. You could look up raw ingredients, add them to a list, then get the nutritional values and calories for the whole list (i.e. a recipe), and even save that and share it. I'm guessing apps like this probably rely on AI now too. I think it was just called "calorie counter" with a blue logo. Some of them have international barcode scanners too, but there's still a lot of guessing, and it takes time if you're not preparing the same things regularly. But they had a pretty robust user-curated database for non-packaged foods. You just had to choose whatever was closest to what you were using, or investigate and make your own custom entries for later.
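For what it's worth, the core of those apps is just a lookup-and-sum over a per-ingredient database. A minimal sketch, with made-up per-100 g values and a hypothetical recipe (none of these numbers come from a real database):

```python
# Minimal sketch of the lookup-and-sum a calorie-counter app does:
# per-100 g values for each ingredient, scaled by the amount used,
# then summed for the whole recipe. All numbers are illustrative only.

# per-100 g values: (kcal, protein_g, fat_g, carbs_g) -- made up
FOOD_DB = {
    "white beans, cooked": (139, 9.7, 0.4, 25.0),
    "smoked sausage": (301, 12.0, 27.0, 2.0),
    "onion, raw": (40, 1.1, 0.1, 9.3),
}

def recipe_totals(ingredients):
    """ingredients: list of (name, grams) pairs."""
    totals = [0.0, 0.0, 0.0, 0.0]
    for name, grams in ingredients:
        for i, per_100g in enumerate(FOOD_DB[name]):
            totals[i] += per_100g * grams / 100
    return dict(zip(("kcal", "protein_g", "fat_g", "carbs_g"), totals))

# e.g. a rough bean-and-sausage bake
print(recipe_totals([
    ("white beans, cooked", 400),
    ("smoked sausage", 150),
    ("onion, raw", 200),
]))
```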
-
I mean - propaganda has in fact gotten us to the shittiest administration possible. AI hype is off the scale - more than the Space Race, more than, well, anything. And it isn't even useful!
It’s far and away a different thang than a new medium about, by, and for humans.
I agree. I would say we're on the cusp of a new technological revolution. Our world is changing fundamentally and rapidly.
-
OTOH you haven't heard of NFTs in a while because AI hype replaced it, so... what hell spawn is going to replace the AI hype?
I'm calling it now– it's quantum computing.
I have some friends who work in it, and I've watched and read damn near everything I can on it (including a few uni courses). It is neat, it has uses, but it will not instantly transform all computing or invalidate all security or anything like that. It's gonna be oversold as fuck.
3Blue1Brown has great videos on it. Grover's algorithm, the best general speedup we know how to apply, searches N possibilities in roughly √N steps instead of N. That's a big deal for heavy search problems like protein folding, but it's a quadratic speedup, not an exponential one. Brute-forcing SHA-256 would still take an eternity, just a smaller eternity.
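To put rough numbers on that "smaller eternity" (a back-of-the-envelope sketch; the guesses-per-second rate is a deliberately generous assumption, not a benchmark):

```python
# Back-of-the-envelope for Grover's speedup: ~sqrt(N) quantum steps
# instead of ~N classical guesses to find a preimage among N options.
# The rate below is a wildly generous assumption, not a real benchmark.

guesses_per_second = 1e9       # hypothetical: one guess per nanosecond
seconds_per_year = 3.15e7

n = 2 ** 256                   # search space for a 256-bit preimage
classical_years = n / guesses_per_second / seconds_per_year
grover_years = n ** 0.5 / guesses_per_second / seconds_per_year

print(f"classical: ~{classical_years:.1e} years")  # ~3.7e60 years
print(f"grover:    ~{grover_years:.1e} years")     # ~1.1e22 years
# Both dwarf the ~1.4e10-year age of the universe; Grover just shrinks it.
```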
-
The way I look at it is that I haven't heard anything about NFTs in a while. The bubble will burst soon enough when investors realize that it's not possible to get much better without a significant jump forward in computing technology.
We're running out of atomic room to make things smaller only slightly more slowly than we're running out of ways to actually build those smaller things, and for a computer to think like a person, as quickly or faster, processing power would need to keep increasing exponentially per unit of space. Silicon won't get us there.
This is a good take for a lot of reasons.
In part because NFTs are still used and have some interesting applications, but 90% of the marketing and use cases were companies trying to profit from the hype train.
-
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
Maybe not an individual prompt, but with how many prompts are made for stupid stuff every day, it will stack up to quite a lot of CO2 in the long run.
Not denying that training the AI demands way more energy, but that doesn't really matter, since manufacturing, training, and millions of people using AI all add up to the same bleak picture long term.
Considering how the discussion about environmental protection has only just started to be taken seriously and here they come and dump this newest bomb on humanity, it is absolutely devastating that AI has been allowed to run rampant everywhere.
According to this article, 500,000 AI prompts amount to the same CO2 output as a round-trip flight from London to New York.
I don't know how many times a day 500,000 AI prompts are reached, but I'm sure it is more than twice or even thrice. As time moves on it will be much more than that. It will probably outdo the number of actual flights between London and New York in a day. Every day. It will probably also catch up to whatever the energy cost was to train the AI in the first place and surpass it.
Because you know. People need their memes and fake movies and AI therapist chats and meal suggestions and history lessons and a couple of iterations on that book report they can't be fucked to write. One person can easily end up prompting hundreds of times in a day without even thinking about it. And if everybody starts using AI to think for them at work and at home, it'll end up being many, many, many flights back and forth between London and New York every day.
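For scale, a rough sketch of what that ratio implies; the daily prompt count below is a hypothetical round number, not a reported figure:

```python
# Rough arithmetic on the article's claimed ratio (500,000 prompts per
# round-trip London-New York flight). The daily volume is hypothetical.

prompts_per_flight = 500_000        # ratio claimed in the linked article
daily_prompts = 1_000_000_000       # hypothetical: one billion prompts a day

print(daily_prompts / prompts_per_flight)  # 2000.0 flight-equivalents per day
```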
-
VR was and is also still a very inaccessible tool for most people. It costs a lot of money and time to even get to the point where you're getting the intended VR experience and that is what it mostly boils down to: an experience. It isn't convenient or useful and people can't afford it. And even though there are many gamers out there, most people aren't gamers and don't care about mounting a VR headset on their cranium and getting seasick for a few minutes.
AI is not only accessible and convenient, it is also useful to the everyday person, if the AI doesn't hallucinate like hell, that is. It has the potential to optimize workloads in jobs with a lot of paperwork, calculations and so on.
I completely agree with you that AI is being pushed very aggressively in ways we haven't seen before and that is because the tech people and their investors poured a lot of money into developing these things. They need it to be a success so they can earn their money back and they will be successful eventually because everybody with money and power has a huge interest in this tool becoming a part of everyday life. It can be used to control the masses in ways we cannot even imagine yet and it can earn the creators and investors a lot of money.
They are already making AI computers. According to some, they will entirely replace the types of computers we are used to today. From what I can understand, it would be preferable to the online AI setups we have currently, which are burning our planet to a crisp with the number of data centers needed to keep them active. Supposedly the AI computer will run everything locally on the laptop and will therefore demand fewer resources, but I'm so fucking skeptical about all this shit that I'm waiting to see how much power a computer with an AI operating system will need to swallow.

I'm too tech-ignorant to understand the ins and outs of what this and that means, but we are definitely going to have to accept that AI is here to stay, and the current setup with online AIs and forced LLMs in every search engine is a massive environmental nightmare. It probably won't stop or change a fucking lick, because people don't give a fuck as long as they are comfortable, and the companies are getting people to use their trash tech just like they wanted, so they won't stop it either.
AI is not only accessible and convenient, it is also useful to the everyday person, if the AI doesn't hallucinate like hell, that is.
This is literally the pitch burning hundreds of billions of dollars into ash. It’s insane.
-
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
-
Real researchers make up studies to cite in their reports? Real lawyers and judges cite fake cases as precedents in legal proceedings? Real doctors base treatment plans on white papers they completely fabricated in their heads? Yeah, I don't think so, buddy.
But but but . . . !!!
AI!!
-
I think a healthier perspective would involve more shades of grey. There are real issues with power consumption and job displacement. There are real benefits with better access to information and getting more done with limited resources. But I expect bringing any nuance into the conversation will get me downvoted to hell.
-
I'm putting on a presentation at work about the downsides of AI next month; please feed me. Together, we can stop the madness and pop this goddamn bubble.
Gemini, feed them some downsides of AI
-
Maybe not an individual prompt, but with how many prompts are made for stupid stuff every day, it will stack up to quite a lot of CO2 in the long run.
Not denying that training the AI demands way more energy, but that doesn't really matter, since manufacturing, training, and millions of people using AI all add up to the same bleak picture long term.
Considering how the discussion about environmental protection has only just started to be taken seriously and here they come and dump this newest bomb on humanity, it is absolutely devastating that AI has been allowed to run rampant everywhere.
According to this article, 500,000 AI prompts amount to the same CO2 output as a round-trip flight from London to New York.
I don't know how many times a day 500,000 AI prompts are reached, but I'm sure it is more than twice or even thrice. As time moves on it will be much more than that. It will probably outdo the number of actual flights between London and New York in a day. Every day. It will probably also catch up to whatever the energy cost was to train the AI in the first place and surpass it.
Because you know. People need their memes and fake movies and AI therapist chats and meal suggestions and history lessons and a couple of iterations on that book report they can't be fucked to write. One person can easily end up prompting hundreds of times in a day without even thinking about it. And if everybody starts using AI to think for them at work and at home, it'll end up being many, many, many flights back and forth between London and New York every day.
I have a hard time believing that article’s guesstimate since Google (who actually runs these data centers and doesn’t have to guess) just published a report stating that the median prompt uses about a quarter of a watt-hour, or the equivalent of running a microwave oven for one second. You’re absolutely right that flights use an unconscionable amount of energy. Perhaps your advocacy time would be much better spent fighting against that.
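A quick cross-check of the two figures being argued about: the per-prompt energy is the quarter-watt-hour median mentioned above, while the grid-intensity and flight-emission numbers are rough ballpark assumptions, so treat the result as order-of-magnitude only:

```python
# Order-of-magnitude cross-check of the two claims above. Per-prompt energy
# comes from the cited median; the other two constants are rough assumptions.

wh_per_prompt = 0.25            # ~a quarter of a watt-hour (reported median)
prompts = 500_000               # the article's "one flight" batch size
grid_kg_co2_per_kwh = 0.4       # assumed ballpark grid carbon intensity
flight_kg_co2 = 1000            # assumed round-trip LHR-JFK, per passenger

kwh = wh_per_prompt * prompts / 1000            # 125 kWh
prompt_kg_co2 = kwh * grid_kg_co2_per_kwh       # ~50 kg CO2
print(f"{prompt_kg_co2:.0f} kg CO2 for 500k prompts vs ~{flight_kg_co2} kg for the flight")
# Under these assumptions the two estimates disagree by roughly a factor of 20.
```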
-
I'm calling it now– it's quantum computing.
I have some friends who work in it, and I've watched and read damn near everything I can on it (including a few uni courses). It is neat, it has uses, but it will not instantly transform all computing or invalidate all security or anything like that. It's gonna be oversold as fuck.
3Blue1Brown has great videos on it. Grover's algorithm, the best general speedup we know how to apply, searches N possibilities in roughly √N steps instead of N. That's a big deal for heavy search problems like protein folding, but it's a quadratic speedup, not an exponential one. Brute-forcing SHA-256 would still take an eternity, just a smaller eternity.
Probably right, but to be fair, it's "been" quantum computing since the '90s.
-
I'm putting on a presentation at work about the downsides of AI next month; please feed me. Together, we can stop the madness and pop this goddamn bubble.
Get thee hence to the fuck_ai community. You will be given sustenance.
-
The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can't trust what they say any longer, because there's a high chance they're just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I've grown to hate this scenario. I am a subject matter expert on some things, and I've still had people trying to waste my time to prove their AI hallucinations wrong.
-
I think a healthier perspective would involve more shades of grey. There are real issues with power consumption and job displacement. There are real benefits with better access to information and getting more done with limited resources. But I expect bringing any nuance into the conversation will get me downvoted to hell.
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain't and it isn't, because those benefits are minuscule. The only cases we know of where that was the actual story turn out to be outsourcing to India and calling it AI.
-
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
-
Yeah, but fuck copyright.
You realize that what I'm talking about protects the artist, right? I'm not talking about the RIAA or MPAA, I'm talking about the creating artist's legal ownership of their work.
-
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain't and it isn't, because those benefits are minuscule. The only cases we know of where that was the actual story turn out to be outsourcing to India and calling it AI.
Idk why profitability is the bar for you. It's common practice for tech companies to be unprofitable for years as they invest in R&D and capture market share.
But products like Claude absolutely provide both of the benefits (and detriments) I mentioned.