Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End
-
As an experienced software dev I'm convinced my software quality has improved by using AI.
Then your software quality was extreme shit before. It's still shit, but an improvement. So, yay "AI", I guess?
That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.
-
You are insulting a person, because they said ai helps them.
They really did that themselves.
-
That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.
Na, the tools suck. I'm not using a rubber hammer to get wood screws into concrete, and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and >95% of it was garbage. If "AI" makes someone a better coder, that says more about that someone than about "AI".
-
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs.
So far what I see is "developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week."

All that increased dev capacity needs to translate into new useful products. Right now the "new useful product" that all energies are poured into is... AI itself. Or even worse, shoehorning "AI-powered" features into every existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I'm pretty confident that something new and useful will remain, but for now the level of hype is tremendous!
Good, fire 2 devs out of 3.
Companies that do this will fail.
Successful companies respond to this by hiring more developers.
Consider the taxi cab driver:
With the invention of the automobile, cab drivers could do their job way faster and way cheaper.
Did companies fire drivers in response? God no. They hired more.
Why?
Because they became more affordable, less wealthy clients could now afford their services, which means demand went way, way up.
If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down the wealth levels of your target demographics, your pool of clients grows exponentially.
If I go from "it costs me 100k to make you a website" to "it costs me 50k to make you a website" my pool of possible clients more than doubles
Which means... you need to hire more devs asap to start matching this newfound level of demand
If you fire devs when your demand is about to skyrocket, you fucked up bad lol
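The price-cut argument above can be sketched with a toy model. The wealth distribution here is entirely made up (the budget of the i-th richest prospective client falls off as 1/sqrt(i)), so treat it as an illustration of the shape of the claim, not data:

```python
# Toy illustration: a price cut can grow the addressable client pool faster
# than linearly when budgets follow a heavy-tailed (made-up) distribution.
def affordable_clients(price, n=100_000):
    # Client i's budget is assumed to be 1,000,000 / sqrt(i).
    return sum(1 for i in range(1, n + 1) if 1_000_000 / i**0.5 >= price)

at_100k = affordable_clients(100_000)  # clients who can pay 100k
at_50k = affordable_clients(50_000)    # clients who can pay 50k
print(at_100k, at_50k)  # halving the price quadruples the pool here
```

With this particular distribution, halving the price takes the pool from 100 to 400 clients; the exact multiplier depends entirely on the assumed wealth curve.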
-
I think the human in the loop currently needs to know what the LLM produced or checked, but they'll get better.
For sure, much like how a cab driver has to know how to drive a cab.
AI is absolutely a "garbage in, garbage out" tool. Just having it doesn't automatically make you good at your job.
The difference between someone who can wield it well and someone who has no idea what they're doing is palpable.
-
We are seeing massive, exponential increases in output, with all sorts of innovations; every few weeks another big step forward happens.
Around a year ago I bet a friend $100 that we won't have AGI by 2029, and I'd make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison, humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and use only a couple dozen watts for even the most complex thinking.
-
Na, the tools suck. I'm not using a rubber hammer to get wood screws into concrete, and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and >95% of it was garbage. If "AI" makes someone a better coder, that says more about that someone than about "AI".
Then try writing the code yourself and asking ChatGPT's o3-mini-high to critique it (be sure to explain the context).
Or ask it to produce unit tests; even if they're not perfect from the get-go, I promise you will save time by having a starting skeleton.
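To make the "starting skeleton" concrete, here is the kind of pytest-style file an assistant might draft; both the `slugify` function and the cases are made up for illustration, not any real project's code:

```python
# Hypothetical function under test (name and behavior invented for this example).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# A drafted test skeleton: not exhaustive, but a structure you can correct
# and extend instead of typing from scratch.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_extra_spaces():
    assert slugify("  a   b ") == "a-b"

def test_empty():
    assert slugify("") == ""
```

Even when a drafted case is wrong, a failing assertion is cheap to spot and fix, which is where the time saving comes from.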
-
Around a year ago I bet a friend $100 that we won't have AGI by 2029, and I'd make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison, humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and use only a couple dozen watts for even the most complex thinking.
Humans are “trained” with maybe ten thousand “tokens” per day
Uhhh... you may wanna rerun those numbers.
It's waaaaaaaay more than that lol.
and take only a couple dozen watts for even the most complex thinking
Mate's literally got smoke coming out of his ears lol.
A single Wh is 860 calories... I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.
-
Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.
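Even a conservative back-of-envelope lands well above the "ten thousand a day" figure; every number below is a rough assumption, not data:

```python
# Rough assumptions: conversational speech runs ~150 words/min, and a child
# might hear or read language for a few hours a day.
words_per_min = 150
hours_per_day = 3
tokens_per_word = 1.3          # common rule of thumb for LLM tokenizers
tokens_per_day = words_per_min * 60 * hours_per_day * tokens_per_word
print(int(tokens_per_day))     # ~35,000 tokens/day from speech alone
```

Reading is faster than speech (~250 words/min), so schoolwork and books push this several times higher, toward the hundreds of thousands mentioned above.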
-
A human, by my estimate, has burned about 13,000 kWh by the time they reach adulthood. Maybe more depending on activity levels.
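For what it's worth, a quick back-of-envelope (all inputs are rough assumptions) does put the lifetime figure in the thousands of kilowatt-hours, and the unit matters: it comes out in kWh, not Wh:

```python
# Rough assumptions: an adult burns ~2,000 kcal/day; 1 kcal = 1.163 Wh.
kcal_per_day = 2000
kwh_per_day = kcal_per_day * 1.163 / 1000   # ~2.3 kWh/day
lifetime_kwh = kwh_per_day * 365 * 18       # energy burned by adulthood
avg_watts = kwh_per_day * 1000 / 24         # whole-body continuous power
print(round(kwh_per_day, 2), round(lifetime_kwh), round(avg_watts))
# ~2.33 kWh/day, ~15,000 kWh by age 18, ~97 W whole body
```

The ~97 W is for the whole body; the brain's share is commonly estimated at around 20 W, which is where the "couple dozen watts for thinking" claim comes from.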
-
While yes, an AI costs substantially more kWh, it also is done in weeks, so it's obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months, it'd prolly require way WAY more than 13,000 kWh during the process for similar reasons.
-
Once trained, a single model can be duplicated infinitely. So it'd be more fair to compare how much millions of people cost to raise, compared to a single model to be trained. Because once trained, you can now make millions of copies of it...
-
Operating costs are continuing to go down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty-fold efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
-
-
They really did that themselves.
Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.
-
This post did not contain any content.
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact that nothing got optimized, and it still didn't collapse, after DeepSeek? Kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.
-
This post did not contain any content.
The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
-
Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.
You seem to enjoy continuing to engage.
-
No, 2,50€ is 2€ and 50ct; 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care about two digits after the comma), and 2.500€ is 2500€.
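A minimal sketch of rendering amounts in that decimal-comma convention; the helper name and the separator-swap trick are illustration only (real code would use a locale/CLDR-aware library):

```python
# Format a euro amount with comma as the decimal separator and dot as the
# thousands separator, by swapping the separators of the English rendering.
def format_eur(amount: float) -> str:
    english = f"{amount:,.2f}"                  # e.g. "2,500.50"
    swapped = (english.replace(",", "\x00")     # protect thousands commas
                      .replace(".", ",")        # decimal point -> comma
                      .replace("\x00", "."))    # thousands commas -> dots
    return swapped + " €"

print(format_eur(2.5))     # 2,50 €
print(format_eur(2500.5))  # 2.500,50 €
```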
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent (whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth, but it still exists.
-
You're confusing AI art with actual art, like rendered illustrations and paintings
It's as much "real" art as photography: taking a relatively finite number of decisions and finding something that looks "good".
-
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent (whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth, but it still exists.
Yes, that's true, but more of an edge case. Something like gasoline is commonly priced in fractional cents, though.
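That sub-cent billing case is exactly where `decimal.Decimal` earns its keep; the one-cent-per-GB rate below is made up for illustration:

```python
from decimal import Decimal

PRICE_PER_GB = Decimal("0.01")           # made-up rate: one cent per gigabyte
used_mb = Decimal("300")
bill = PRICE_PER_GB * used_mb / Decimal("1024")  # exact, no float drift
print(bill)                              # 0.0029296875 -- exists, but sub-cent
print(bill.quantize(Decimal("0.01")))    # 0.00 -- what a rounded display shows
```

The bill is nonzero and exact internally, even though a two-decimal display rounds it to 0.00.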
-
Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
No, there are some ideas out there. Concepts like hierarchical reinforcement learning are more likely to lead to AGI, with the creation of foundational policies; problem is, as it stands, it's a really difficult technique to use, so it isn't used often. And LLMs have sucked all the research dollars out of any other ideas.
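A toy sketch of the "options" flavor of hierarchical RL: a meta-policy learns Q-values over two hand-coded sub-policies on a 1-D corridor, instead of over primitive left/right steps. Every detail (the corridor, the options, the hyperparameters) is invented for illustration:

```python
import random

GOAL, N = 10, 11                      # states 0..10; reward only at state 10
OPTIONS = {"run_left": 0, "run_right": GOAL}  # each option walks to a landmark

def run_option(state, name):
    """Execute an option to termination; return (end state, primitive steps)."""
    target = OPTIONS[name]
    return target, abs(target - state)

def train(episodes=500, alpha=0.5, eps=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, o): 0.0 for s in range(N) for o in OPTIONS}
    for _ in range(episodes):
        s = rng.randrange(GOAL)       # start anywhere except the goal
        for _ in range(5):            # a few option choices per episode
            if rng.random() < eps:
                o = rng.choice(sorted(OPTIONS))        # explore
            else:
                o = max(OPTIONS, key=lambda x: q[(s, x)])  # exploit
            s2, steps = run_option(s, o)
            reward = (1.0 if s2 == GOAL else 0.0) - 0.01 * steps
            target = reward + max(q[(s2, x)] for x in OPTIONS)
            q[(s, o)] += alpha * (target - q[(s, o)])
            s = s2
            if s == GOAL:
                break
    return q

q = train()
print(max(OPTIONS, key=lambda o: q[(5, o)]))  # greedy option from mid-corridor
```

The point of the hierarchy is that the meta-policy learns over a handful of option choices rather than long sequences of primitive moves; real HRL also has to learn the options themselves, which is the hard part the comment alludes to.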
-
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
It always wins in the end, though. Look up "The Bitter Lesson".
-
I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It's often implied (e.g. "you'll be able to replace workers with this") or they are vague on timelines (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.
Part of it is we keep realizing AGI is a lot broader and more complex than we think.
-
I remember listening to a podcast that's about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff, and he does his research and talks to experts when the subject involves something he isn't an expert in himself.
There was this episode where he kinda got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works, before you make new assumptions and test them). He gave the example of the Apple Vision Pro: despite being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper that showed we could parallelize language models, leading to the creation of "larger language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
There's been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.
-
Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge: a jack of all trades, master of none.
Lots of people think this. They keep turning out to be wrong. Look up "The Bitter Lesson".
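The "several specialized models plus a router" idea above resembles a crude mixture-of-experts setup. A toy sketch, where every component is a trivial stand-in rather than a real model:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def math_expert(query: str) -> str:
    a, op, b = query.split()          # stand-in for a math-specialized model
    return str(OPS[op](int(a), int(b)))

def language_expert(query: str) -> str:
    return query.upper()              # stand-in for a language-specialized model

def route(query: str) -> str:
    """Gating step: send each query to the expert whose domain it matches."""
    expert = math_expert if any(c.isdigit() for c in query) else language_expert
    return expert(query)

print(route("2 + 3"))   # routed to the math expert
print(route("hello"))   # routed to the language expert
```

In real mixture-of-experts systems the gate is learned rather than hand-coded, which is exactly the part the "bitter lesson" reply argues scaling tends to subsume.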