Is It Just Me?
-
I'm calling it now: it's quantum computing.
I have some friends who work in it, and I've watched and read damn near everything I can on it (including a few uni courses). It is neat, it has uses, it will not instantly transform all computing or invalidate all security or anything like that. It's gonna be oversold as fuck.
3Blue1Brown has great videos on it. Grover's Algorithm, the best we can think to try to apply, is √N times faster than traditional computing. That's a lot faster for intense stuff like protein folding, but it's a quadratic speedup, not an exponential one. SHA-256 still would take an eternity to brute force, just a smaller eternity.
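To put that "smaller eternity" in numbers, here's a back-of-the-envelope sketch (the billion-guesses-per-second rate is an assumed figure, not any real hardware spec):

```python
# Classical brute force of a 256-bit hash preimage: ~2^256 guesses.
classical_ops = 2 ** 256
# Grover's algorithm cuts that to ~sqrt(N) = 2^128 quantum queries.
grover_ops = 2 ** 128

rate = 10 ** 9                       # assumed: a billion queries per second
seconds_per_year = 60 * 60 * 24 * 365
grover_years = grover_ops / (rate * seconds_per_year)

print(f"Grover still needs ~{grover_years:.2e} years")  # ~1.08e+22 years
```

Even with the quadratic speedup you're at roughly 10^22 years, around a trillion times the age of the universe. A smaller eternity, but still an eternity.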
Probably right, but to be fair it’s “been” quantum computing since the ’90s.
-
I'm putting a presentation on at work about the downsides of AI next month, please feed me. Together, we can stop the madness and pop this goddamn bubble.
Get thee hence to the fuck_ai community. You will be given sustenance.
-
The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can't trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I've grown to hate this scenario. I am a subject matter expert on some things, and I've still had people trying to waste my time to prove their AI hallucinations wrong.
-
I think a healthier perspective would involve more shades of grey. There are real issues with power consumption and job displacement. There are real benefits with better access to information and getting more done with limited resources. But I expect bringing any nuance into the conversation will get me downvoted to hell.
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain’t and it isn’t, because those benefits are minuscule. The only cases we know of where that was the actual story turn out to be outsourcing to India and calling it AI.
-
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
-
Yeah, but fuck copyright.
You realize that what I'm talking about protects the artist, right? I'm not talking about the RIAA or MPAA, I'm talking about the creating artist's legal ownership of their work.
-
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain’t and it isn’t, because those benefits are minuscule. The only cases we know of where that was the actual story turn out to be outsourcing to India and calling it AI.
Idk why profitability is the bar for you. It’s common practice for tech companies to be non-profitable for years as they invest in R&D and capturing market share.
But products like Claude absolutely provide both of the benefits (and detriments) I mentioned.
-
You realize that what I'm talking about protects the artist, right? I'm not talking about the RIAA or MPAA, I'm talking about the creating artist's legal ownership of their work.
You may not like this, but I still don't care.
-
Idk why profitability is the bar for you. It’s common practice for tech companies to be non-profitable for years as they invest in R&D and capturing market share.
But products like Claude absolutely provide both of the benefits (and detriments) I mentioned.
Profitability is the point, right? Moving goods from one place to another at a loss in the expectation that scale and market share will pan out is perfectly acceptable.
Vaporware at $40/mo is not. Claude can’t ever be profitable at its price point, and even with the price that low its market share is abysmal. There’s delaying profitability, and then there’s burning unconscionable piles of cash without any idea of how it can become profitable.
AI’s adoption has been atrocious by any stretch, so unless the plan is to continue to fuck up an unbelievable number of models and processes in the hope that some future Ed McMahon hands over an oversized novelty check worth billions, their activities are scandalously unprofitable.
-
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
Anyone in engineering knows the 90% of your goal is the easy bit. You’ll then spend 90% of your time on the remainder. Same for AI and getting past the uncanny valley with art.
-
I'm putting a presentation on at work about the downsides of AI next month, please feed me. Together, we can stop the madness and pop this goddamn bubble.
Ask any AI which states have the letter R in them. Watch them get it wrong, and show to colleagues how dangerous it is to rely on their results as fact.
-
it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
The goal is "[Replace you money-needing meatsacks with] AI" but the suits don't want to say it that clearly.
-
I absolutely agree that AI is becoming a mental crutch that a disturbing number of people are snatching up and hobbling around on. It feels like the setup of Wall-E, where everyone is rooted in their floating rambler scooters.
I think the fixation on individual consumer use of AI is overstated. The bulk of AI's energy/water use is in training the models and the endless automated polling. The random guy asking "@Grok is this true?" has a negligible impact on energy usage, particularly in light of the number of automated processes hammering the various AI interfaces far faster than any collection of humans could.
I'm not going to use AI to write my next adventure or generate my next character. I'm not going to bemoan a player who shows up to game with a portrait with melted fingers, because they couldn't find "elf wizard in bearskin holding ice wand while standing on top of glacier" in DeviantArt.
For the vast majority of users, this is a novelty. What's more, it's a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine. I get why people think it sucks and abstain from using it. I get why people who use it too much can straight up drive themselves insane. I get that our cyberpunk-style waste management strategy is going to get one of the next few generations into a nightmarish blight. But I'm not going to hang that on the head of someone who wants to sit down at a table with their friends, look them in the eye, and say "Check out this cool new idea I turned into a playable character".
Because if you're at the table and you're excited to play with other humans in a game about going out into the world on adventures, that's as good an antidote to AI as I could come up with.
And hey, as a DM? If you want to introduce the Mind Flayer "Idea Sucker" machine that lures people into its brain-eating maw by promising to give them genius powers? And maybe you want to name the Mind Flayer Lord behind the insidious plot Beff Jezos or Mealon Husk or something? Maybe that's a good way to express your frustration with the state of things.
As someone who's GM'ed tabletops, I find it interesting that players who froth at the mouth at the existence of an AI token because "AI commits piracy and art theft" then turn around and insist that I (or they) just pick an image from searching the internet. If you've ever browsed an art site, you'd know that doing so is actual piracy and art theft, especially with artists who have 40-page terms and conditions, an interesting number of which include "use in tabletops forbidden" clauses.
-
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it's yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
What if the point of AI is to have it create a personal model for each of us, using the vast amounts of our data they have access to, in order to manipulate us into buying and doing whatever the people who own it want but they can't just come out and say that?
-
This is a great representation of why not to argue with someone who debates like this.
Arguments like these are like Hydras. Start tackling any one statement that may be taken out of context, or have more nuance, or is a complete misrepresentation, and two more pop up.
It sucks because true, good points get lost in the tangle.
For instance, there are soft-science, social-interaction areas where AI is doing wonders.
Specifically, in the field of law, now that lawyers have learned not to rely on AI for citations, they are instead offloading hundreds of thousands or millions of pages of documents that they were never actually going to read, and getting salient results from allowing an AI to scan through them to pull out interesting talking points.
Pulling out these interesting talking points, fact-checking them, and A/B testing the ways to present them to the jury with an AI has made it so that many law firms are getting thousands or millions of dollars more on a lawsuit than they anticipated.
And you may be against American law for all of its frivolous plaintiffs' lawsuits or something, but each of these outcomes are decided by human beings, and there are real damages that are lifelong that are being addressed by these lawsuits, or at least in some way compensated.
The more money these plaintiffs get for the injuries that they have to live with for the rest of their lives, the better for them, and AI made the difference.
Not that lawyers are fundamentally incapable or uncaring, but for every superstar lawyer on the planet, there are 999 who are working hard and just do not have the raw plot-armor Deus Ex Machina dropping everything directly into their lap that they would need to operate at that level.
And yes, if you want to be particular, a human being should have done the work. A human being can do the work. A human being is actually being paid to do the work. But when you can offload grunt work to a computer and get usable results from it that improves a human's life, that's the whole fucking reason why we invented computers in the first place.
-
What if the point of AI is to have it create a personal model for each of us, using the vast amounts of our data they have access to, in order to manipulate us into buying and doing whatever the people who own it want but they can't just come out and say that?
It's our own version of The Matrix
-
Profitability is the point, right? Moving goods from one place to another at a loss in the expectation that scale and market share will pan out is perfectly acceptable.
Vaporware at $40/mo is not. Claude can’t ever be profitable at its price point, and even with the price that low its market share is abysmal. There’s delaying profitability, and then there’s burning unconscionable piles of cash without any idea of how it can become profitable.
AI’s adoption has been atrocious by any stretch, so unless the plan is to continue to fuck up an unbelievable number of models and processes in the hope that some future Ed McMahon hands over an oversized novelty check worth billions, their activities are scandalously unprofitable.
Why does profit matter? I don't personally give a shit about the longevity, margins or market share of any company invested in the technology, but I am generally in favor of research and development of any technology. In most research it's hard to predict the future applications.
That's not to say development is always smart, or safe, or ethical, just that it has to happen in order to see where this goes. Even if there's an end point it's helpful to know where it is.
Unfortunately, capitalism requires a sacrifice to the economy in order to pursue anything. That's what sucks about this. If we weren't hard wired to justify existence in capital there wouldn't be so much occlusive hype around it.
-
Yes, you're the weird one. Once you realize that 43% of the USA is FUNCTIONALLY ILLITERATE, you start realizing why people are so enamored with AI. (Since I know some twat is gonna say shit: I'm using the USA here as an example, I'm not being US-centric.)
Our artificial intelligence is smarter than 50% of the population (don't get me started on 'hallucinations'... do you know how many hallucinations the average person has every day?!) -- and is stupider than the top 20% of the population.
The top 20% wonder if everyone has lost their fucking minds, because to them it looks like it is completely worthless.
It's more just that the top 20% are naive to the stupidity of the average person.
I have to say, I don't agree with some of your other points elsewhere here, but this makes a lot of sense.
-
I have a hard time believing that article’s guesstimate since Google (who actually runs these data centers and doesn’t have to guess) just published a report stating that the median prompt uses about a quarter of a watt-hour, or the equivalent of running a microwave oven for one second. You’re absolutely right that flights use an unconscionable amount of energy. Perhaps your advocacy time would be much better spent fighting against that.
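For what it's worth, the microwave comparison is at least arithmetically consistent (a quick sanity check; the 900 W oven rating is my assumption, not Google's):

```python
prompt_wh = 0.24                    # Google's reported median energy per prompt
prompt_joules = prompt_wh * 3600    # 1 Wh = 3600 J, so ~864 J per prompt

microwave_watts = 900               # assumed typical oven power draw
seconds_equivalent = prompt_joules / microwave_watts
print(f"~{seconds_equivalent:.2f} s of microwave time")  # ~0.96 s
```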
And Google would never lie about how much energy a prompt costs, right?
Especially not since they have a vested interest in having people use their AI products, right?
-
Real researchers make up studies to cite in their reports? Real lawyers and judges cite fake cases as precedents in legal proceedings? Real doctors base treatment plans on white papers they completely fabricated in their heads? Yeah, I don't think so, buddy.
I think they’re saying that the kind of people who take LLM-generated content as fact are the kind of people who don’t know how to look up information in the first place. Blaming the LLM for it is like blaming a search engine for showing bad results.
Of course LLMs make stuff up, they are machines that make stuff up.
Sort of an aside, but doctors, lawyers, judges and researchers make shit up all the time. A professional designation doesn't make someone infallible or even smart. People should question everything they read, regardless of the source.