Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End
-
Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what the companies claim they are. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars. Things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.
I remember listening to a podcast that's about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff; he does his research and talks to experts whenever the subject involves something he isn't an expert in himself.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (its hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can't be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand the knowledge we have. Sometimes those insights are completely random, often you need a whole career in that field, and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper showing that we could parallelize language models, which led to the creation of "larger language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
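The paper in question is presumably the 2017 transformer paper ("Attention Is All You Need"); here's a minimal NumPy sketch of its core trick, scaled dot-product self-attention. Every position is handled by a few matrix multiplies instead of a token-by-token loop, which is what lets training parallelize across whole sequences and huge GPU clusters (the shapes and data here are made up for illustration):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once.

    x: (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_head) projection matrices
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # all positions projected in parallel
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                        # weighted mix of values, no sequential loop

# Toy usage: 8 tokens, 16-dim embeddings, one 8-dim head.
# An RNN would have to walk these 8 tokens one after another; this doesn't.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)  # (8, 8)
```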
In fact, the only practices we know of that actually accelerate science are the collaboration of scientists around the world, the publishing of reproducible papers so that others can build on them and have insights you didn't even think about, and so on.
-
I like my project manager: they find me work, ask how I'm doing, and talk straight.
It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
COs are corporate politicians, media-trained to say only things that are completely unrevealing and lacking any substance.
This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.
I have the same reaction as you, but the system is working as intended. Better to just shut it out and use the time to think about that issue you're having on a personal cat project or what toy to buy for your cat's birthday.
-
Yeah, he should be using real art, like stock photos and shitty clip art.
If his business can't afford to pay someone qualified to do the work, the business shouldn't exist.
-
It's ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low-power training/inference optimizations, fundamental changes in the math like bitnet (sketch of the idea below), new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are putting out all this research, but the big model trainers holding the purse strings/GPUs are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's gone full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient.
DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
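To unpack the bitnet mention above: as I understand it, that line of work constrains weights to the three values {-1, 0, +1} plus a per-matrix scale, so matrix multiplies collapse into adds and subtracts. A rough sketch of that "absmean" ternary quantization idea (my reading of it, with made-up shapes, not code from any of those papers):

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Quantize a float weight matrix to {-1, 0, +1} plus one scale per matrix.

    Scale by the mean absolute value, round, clip to [-1, 1]; this is the
    "absmean" scheme described for 1.58-bit ("ternary") models.
    """
    scale = np.abs(w).mean() + eps
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q.astype(np.int8), scale

def ternary_matmul(x, w_q, scale):
    # With ternary weights, each "multiply" is really an add, subtract, or skip,
    # which is where the memory and power savings come from.
    return (x @ w_q) * scale

w = np.random.default_rng(1).normal(size=(16, 16))
w_q, scale = ternary_quantize(w)
x = np.ones((1, 16))
print(np.abs(ternary_matmul(x, w_q, scale) - x @ w).mean())  # quantization error of the approximation
```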
-
I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It's often implied (e.g. "you'll be able to replace workers with this"), or they stay vague on the timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it's fair to editorialize this as a dead end, because even with billions of dollars being poured in, they won't be able to deliver AGI on the timeline they are promising.
Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or any realistic extrapolation of it; it is for the sort of product OpenAI is promising, the equivalent of a full-time research assistant for $20k a month. Which is way more expensive than an actual research assistant, but that's not stopping them from making the pitch.
-
The actual survey result:
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
Right, simply scaling won't lead to AGI; there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the "atom of thought" paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don't think anyone really knows.
-
But in a grammatical sense it's the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between the integer and the fraction.
I have no strong preference either way. I think both are valid and sensible systems, and it's only confusing because of competing standards. I think that over a long enough time, due to the internet, the period as the decimal separator will prevail, but it's gonna happen naturally; it's not something we can force. Many young people I know already use it that way here in Germany.
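To put the competing standards in code terms: programming languages hard-code the period as the decimal separator, and the comma convention only appears through locale-aware formatting. A small Python sketch (the German locale is an assumption here; it may not be installed on every system):

```python
import locale

number = 1234567.89

# The language syntax itself is period-decimal; there's no way around that.
print(number)  # 1234567.89

# Locale-aware formatting flips both separators, e.g. German conventions:
try:
    locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")  # assumes this locale exists on the system
    print(locale.format_string("%.2f", number, grouping=True))  # 1.234.567,89
except locale.Error:
    print("de_DE locale not available on this system")
```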
-
The actual survey result:
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
I think most people agree, including the investors pouring billions into this.
The same investors that poured (and are still pouring) billions into crypto, invested in sub-prime loans, and valued pets.com at $300M? I don't see any way the companies will be able to recoup the costs of their investment in "AI" data centers (e.g. the $500B Stargate project, or Microsoft's $80B; probably upwards of a trillion dollars invested globally in these data centers).
-
The actual survey result:
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
The bigger loss is the ENORMOUS amount of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.
AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.
Complete waste.
-
I like my project manager: they find me work, ask how I'm doing, and talk straight.
It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
I just turn off my camera and turn on Forza Motorsport or something like that.
-
I like my project manager: they find me work, ask how I'm doing, and talk straight.
It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
The number of times my CTO says we're going to do THING, only to have to be told that this isn't how things work...
-
As an experienced software dev I'm convinced my software quality has improved by using AI.
Then your software quality was extreme shit before. It's still shit, but an improvement. So, yay "AI", I guess?
That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.
-
You are insulting a person because they said AI helps them.
They really did that themselves.
-
That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.
Na, the tools suck. I'm not using a rubber hammer to get woodscrews into concrete, and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and it was >95% garbage. If "AI" makes someone a better coder, that says more about that someone than about the "AI".
-
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs.
So far what I see is "developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week."
All that increased dev capacity needs to translate into new useful products. Right now the "new useful product" that all energies are poured into is... AI itself. Or even worse, shoehorning "AI-powered" features into all existing products, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I'm pretty confident that something new and useful will remain, but for now the level of hype is tremendous!
Good, fire 2 devs out of 3.
Companies that do this will fail.
Successful companies respond to this by hiring more developers.
Consider the taxi cab driver:
With the invention of the automobile, cab drivers could do their job way faster and way cheaper.
Did companies fire drivers in response? God no. They hired more.
Why?
Because they became more affordable, less wealthy clients could now afford their services, which means demand went way, way up.
If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down the wealth levels of your target demographic, your pool of clients grows exponentially.
If I go from "it costs me 100k to make you a website" to "it costs me 50k to make you a website", my pool of possible clients more than doubles (toy numbers in the sketch below).
Which means... you need to hire more devs asap to start matching this newfound level of demand
If you fire devs when your demand is about to skyrocket, you fucked up bad lol
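A toy illustration of that "more than doubles" claim. It assumes client budgets follow a Pareto (heavy-tailed) distribution, which is my stylized assumption rather than anything from the comment; with a tail like that, halving the price grows the affordable pool by more than 2x:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assume client budgets are Pareto-distributed with a $10k floor (illustrative numbers).
# With shape a > 1, halving the price more than doubles who can afford you.
budgets = 10_000 * (1 + rng.pareto(a=1.5, size=1_000_000))

for price in (100_000, 50_000):
    can_afford = int((budgets >= price).sum())
    print(f"price ${price:,}: {can_afford:,} potential clients")

# The affordable pool scales roughly like price**(-a), so going from $100k
# to $50k multiplies it by about 2**1.5 ≈ 2.8x under this assumption.
```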
-
I think the human in the loop currently needs to know what the LLM produced or checked, but they'll get better.
For sure, much like how a cab driver has to know how to drive a cab.
AI is absolutely a "garbage in, garbage out" tool. Just having it doesn't automatically make you good at your job.
The difference between someone who can wield it well and someone who has no idea what they're doing is palpable.
-
We are seeing massive exponential increases in output with all sorts of innovations; every few weeks another big step forward happens.
Around a year ago I bet a friend $100 that we won't have AGI by 2029, and I'd make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison, humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.
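For what it's worth, the energy side of that comparison roughly checks out. A quick back-of-the-envelope using a ~2,000 kcal/day diet and a ~20 W brain, which are my round numbers rather than the commenter's:

```python
# Sanity check of the "ten megajoules a day / couple dozen watts" figures.
KCAL_TO_J = 4184                       # joules per kilocalorie

daily_diet_kcal = 2000                 # typical adult intake (assumption)
daily_diet_mj = daily_diet_kcal * KCAL_TO_J / 1e6
print(f"whole-body energy per day: ~{daily_diet_mj:.1f} MJ")   # ~8.4 MJ, close to "ten"

brain_watts = 20                       # commonly cited resting brain power draw
brain_kwh_per_day = brain_watts * 24 / 1000
print(f"brain energy per day: ~{brain_kwh_per_day:.2f} kWh")   # ~0.48 kWh
```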
-
Na, the tools suck. I'm not using a rubber hammer to get woodscrews into concrete, and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and it was >95% garbage. If "AI" makes someone a better coder, that says more about that someone than about the "AI".
Then try writing the code yourself and ask ChatGPT's o3-mini-high to critique your code (be sure to explain the context).
Or ask it to produce unit tests; even if they're not perfect from the get-go, I promise you will save time by having a starting skeleton.
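For illustration, the kind of starting skeleton meant here might look like the following; parse_price and its behaviour are invented for the example, so substitute your own function:

```python
import pytest

def parse_price(text):
    """Toy function under test (stand-in for your own code)."""
    if not text:
        raise ValueError("empty input")
    return float(str(text).lstrip("$"))

def test_parses_plain_number():
    assert parse_price("19.99") == 19.99

def test_strips_currency_symbol():
    assert parse_price("$19.99") == 19.99

@pytest.mark.parametrize("bad_input", ["", "abc", None])
def test_rejects_garbage(bad_input):
    with pytest.raises(ValueError):
        parse_price(bad_input)
```

Run it with pytest, then swap in the real import and whatever edge cases matter to you.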
-
Around a year ago I bet a friend $100 that we won't have AGI by 2029, and I'd make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison, humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.
Humans are “trained” with maybe ten thousand “tokens” per day
Uhhh... you may wanna rerun those numbers.
It's waaaaaaaay more than that lol.
and take only a couple dozen watts for even the most complex thinking
Mate's literally got smoke coming out of his ears lol.
A single Wh is 860 calories... I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.
- Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.
- A human, by my estimate, has burned about 13,000 kWh by the time they reach adulthood (rough math in the sketch below). Maybe more depending on activity levels.
- While yes, an AI costs substantially more Wh, it is also trained in weeks, so it's obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months, it'd probably require way, WAY more than 13,000 kWh during the process for similar reasons.
- Once trained, a single model can be duplicated infinitely. So it'd be more fair to compare what millions of people cost to raise against a single model being trained, because once trained, you can make millions of copies of it.
- Operating costs are continuing to go down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty-times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
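Rough math behind that ~13,000 kWh estimate, using a ~2,000 kcal/day intake (my assumption, not the commenter's):

```python
# Energy to "raise" one human to adulthood, from daily food intake.
DAILY_KCAL = 2000                       # typical adult intake (assumption)
daily_kwh = DAILY_KCAL * 4184 / 3.6e6   # about 2.3 kWh per day
human_kwh = daily_kwh * 365 * 18
print(f"one human to age 18: ~{human_kwh:,.0f} kWh")   # ~15,000 kWh, same ballpark

# The duplication point in the same terms: whatever a model's one-time training
# energy is, it gets split across every copy deployed afterwards, whereas every
# additional human costs another ~15,000 kWh.
```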
-
They really did that themselves.
Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.