Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End
-
I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.
That's what I did on my Samsung Galaxy S5 a decade ago.
-
I remember listening to a podcast that's about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff, and he does his research and talks to experts when the subject involves something he isn't an expert in himself.
There was this episode where he kinda got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can't be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments to expand the knowledge we have. Sometimes those insights are completely random; often you need to have spent a whole career in that field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper showing that language models could be parallelized ("Attention Is All You Need," the 2017 transformer paper), leading to the creation of "larger language models." That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know of that actually accelerates science is collaboration among scientists around the world: publishing reproducible papers so that others can expand on them and have insights you didn't even think of, and so on.
This also shows why the current neglect of basic/general research without a profit goal is holding back innovation.
-
I'm not AI, but I'd like to say that thing to you at no cost at all, you useless bag of meat.
-
Wait, so the people doing the work don't get paid, and the people who get paid steal from others?
That is just so uncharacteristic of capitalism. What a surprise.
It’s also cultish.
Everyone was trying to ape ChatGPT. Now they're rushing to ape DeepSeek R1, since that's what's trending on social media.
It's very late-stage capitalism, yes, but that doesn't come close to painting the whole picture. There's a lot of groupthink, an urgency to "catch up and ship" and look good quick rather than focus on experimentation, sane applications, and such. When I think of shitty capitalism, I think of stagnant entities: shitty publishers, dysfunctional departments, consumer abuse, things like that.
This sector is trying to innovate and make something good, but it's like the purse holders and researchers have horse blinders on, completely captured by social media hype and unable to see much past it.
-
LLMs are good for learning, brainstorming, and mundane writing tasks.
-
LLMs are good for learning, brainstorming, and mundane writing tasks.
Yes, and maybe finding information right in front of them, and nothing more
-
Misleading title. From the article,
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
In no way does this imply that the "industry is pouring billions into a dead end." AGI isn't even needed for industry applications; just implementing current-level agentic systems will be more than enough to have massive industrial impact.
-
It's ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week: low-power training/inference optimizations, fundamental changes in the math like BitNet (sketched below), new attention mechanisms, cool tools to make models more controllable, steerable, and grounded. This is all getting funded, right?
No.
Universities and such are putting out all this research, but the big model trainers holding the purse strings/GPUs are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's gone full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient.
DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
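For context, "BitNet" refers to the line of research where model weights are constrained to the three values -1, 0, and +1, so most of the multiplications in a matrix multiply reduce to additions. Here's a minimal sketch of the quantization step, assuming PyTorch (the function name and structure are mine, not from any official implementation):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    # BitNet b1.58-style "absmean" quantization, roughly:
    # scale the weight matrix by its mean absolute value,
    # then round every entry to the nearest of {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # effective weight = w_q * scale
```

During training, the full-precision weights are kept around and quantized on the fly, with a straight-through estimator passing gradients through the rounding. The point stands: this is published, reproducible work that the big labs have mostly not picked up.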
-
Good ideas are a dime a dozen. Implementation is the game.
Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.
-
I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It's often implied (e.g., you'll be able to replace workers with this) or they are vaguer on the timeline (e.g., OpenAI saying they believe their research will eventually lead to AGI).
With that context, I think it's fair to editorialize this as a dead end, because even with billions of dollars being poured in, they won't be able to deliver AGI on the timeline they are promising.
There are plenty of back-office ticket-processing jobs that can be, and have been, replaced by current-gen AI.
-
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact that nothing got optimized, and it still didn't collapse, after DeepSeek? Kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.
Why didn't you drop the quotes from Turing, Minsky, and Lovelace?
-
Why didn't you drop the quotes from Turing, Minsky, and Lovelace?
Because finding the specific stuff they said, which was in Lovelace's case very broad/vague, and in Turing's and Minsky's cases far too technical for anyone with Sam Altman's dick in their mouth to understand, sounds like actual work. If you're genuinely curious, you can look up what they had to say. If you're just here to argue for this shit, you're not worth the effort.
-
Good ideas are a dime a dozen. Implementation is the game.
Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.
The corporate implementations are mostly crap, though, with a few exceptions.
What's needed is better "glue" in the middle: larger entities integrating ideas from a bunch of standalone papers, out in the open, so they actually work together instead of fading out of memory while the big implementations never even know they existed.
-
I went to CES this year and sat in on a few AI panels. This is actually not far off. Some said yeah, this is right, but multiple panels I went to said that this is a dead end, and while useful, they are starting down different paths.
It's not bad; we are just finding it's not great.
-
I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.
I use ChatGPT daily in my business, but I use it more as a guide than a real replacement.
-
Yes, and maybe finding information right in front of them, and nothing more
Analyzing text from a different point of view than your own. I call that a "synthetic second opinion."
-
That may be true technologically, but if the economics don't add up, it's a bubble.
Even the open models released today, the ones you can run on your own, can boost your productivity massively if you know what you're doing. Most people here are just too daft to know what they're doing, and they parrot whatever shite memes have told them to think.
-
Ya about as revolutionary as my left nut
Does your left nut give people 20:10 vision? Because AI already does. Can it detect cancer before a human can? Is it accelerating the fight against antibiotic resistance, protein synthesis, and the testing of new medications?
Shut the fuck up you clueless eejit.
-
Does your left nut give people 20:10 vision? Because AI already does. Can it detect cancer before a human can? Is it accelerating the fight against antibiotic resistance, protein synthesis, and the testing of new medications?
Shut the fuck up you clueless eejit.
Yes. Believe it or not, my left nut can do those things.
-
Why would you need anyone to buy your products when you can just enjoy them yourself?
Because there's always a bigger fish out there to get you. Or that's what trillionaires will tell themselves when they wage a robotic war. This system isn't made to last the way it's progressing right now.
-
I'm a software developer, and I know that AI is just the shiny new toy whose buzzword everyone uses to generate investment revenue.
99% of the crap people use it for is worthless. It's just a hammer, and everything is a nail.
It's just like "the cloud" was 10 years ago. Now everyone is back-pedaling from that because it didn't turn out to be the panacea that was promised.