Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End
-
Around a year ago I bet a friend $100 that we wouldn't have AGI by 2029, and I'd make the same bet today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison, humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.
Humans are “trained” with maybe ten thousand “tokens” per day
Uhhh... you may wanna rerun those numbers.
It's waaaaaaaay more than that lol.
and take only a couple dozen watts for even the most complex thinking
Mate's literally got smoke coming out of his ears lol.
A single Wh is 860 calories... I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.
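That conversion is easy to sanity-check (1 Wh = 3600 J, 1 small calorie = 4.184 J):

```python
# Sanity-check the unit conversion from the reply above:
# 1 watt-hour = 3600 joules, and 1 (small) calorie = 4.184 joules.
joules_per_wh = 3600
joules_per_cal = 4.184
cal_per_wh = joules_per_wh / joules_per_cal
print(round(cal_per_wh))  # 860
```

Worth noting the 860 figure is in small calories; food-label Calories are kilocalories, so 1 Wh is roughly 0.86 kcal.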
-
Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.
-
A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more, depending on activity levels.
-
While yes, an AI costs substantially more Wh, it's also trained in weeks, so it's obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months, it'd prolly require way, WAY more than 13,000 Wh during the process, for similar reasons.
-
Once trained, a single model can be duplicated infinitely. So it'd be fairer to compare what it costs to raise millions of people against what it costs to train a single model, because once trained, you can make millions of copies of it...
-
Operating costs are continuing to go down and down. Diffusion-based text generation just made another huge leap forward, reportedly around a twenty-times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
-
-
They really did that themselves.
Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.
-
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact that nothing got optimized, and it still didn't collapse, after DeepSeek kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.
-
The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
-
Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.
You seem to enjoy continuing to engage.
-
No, 2,50 € is 2 € and 50 ct; 2.50 € is wrong in this system. 2,500 € is also wrong (for currency, where you only care about two digits after the comma), and 2.500 € is 2500 €.
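The convention can be sketched in a few lines (a hand-rolled formatter rather than a locale library, so it's self-contained; the function name is made up):

```python
def format_euro_de(amount):
    """Format a number per the comma-decimal convention: comma as the
    decimal separator, period as the thousands separator.
    (Illustrative only; real code would use a locale library.)"""
    s = f"{amount:,.2f}"                        # US-style first: '2,500.00'
    s = s.translate(str.maketrans(",.", ".,"))  # swap ',' and '.'
    return s + " €"

print(format_euro_de(2.5))   # 2,50 €
print(format_euro_de(2500))  # 2.500,00 €
```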
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent/(whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.
-
You're confusing AI art with actual art, like pieces rendered from illustration and painting
It's as much "real" art as photography: making a relatively finite number of decisions and finding something that looks "good".
-
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent/(whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.
Yes, that's true, but it's more of an edge case. Something like gasoline is commonly priced in fractional cents, though.
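The bandwidth-billing case can be sketched with Python's `decimal` module, which keeps sub-cent amounts exact instead of rounding them away (the rate and usage figures are made up):

```python
from decimal import Decimal

# Illustrative figures: one cent per gigabyte, a few megabytes used.
rate_per_gb = Decimal("0.01")
used_mb = Decimal("3")
bill = rate_per_gb * used_mb / Decimal("1024")  # MB -> GB

print(bill > 0)                        # True: the balance exists...
print(bill.quantize(Decimal("0.01")))  # 0.00: ...but vanishes at two decimals
```

Hence displays with more than two decimal places: the running balance is real money, just smaller than a cent.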
-
Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI; the problem is that, as it stands, it's a really difficult technique to use, so it isn't used often. And LLMs have sucked all the research dollars away from other ideas.
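For readers unfamiliar with the term, here's a toy sketch of the two-level structure behind hierarchical RL (all names and the toy "state" are made up for illustration, not from any specific paper): a high-level policy picks an option (a reusable sub-policy), which then issues primitive actions until it terminates.

```python
class Option:
    """A reusable sub-policy with its own termination condition."""
    def __init__(self, name, action, horizon):
        self.name = name        # label for this sub-policy
        self.action = action    # primitive action the toy option repeats
        self.horizon = horizon  # steps before the option terminates

    def run(self, state):
        """Execute the sub-policy to termination; return the new state."""
        for _ in range(self.horizon):
            state += self.action
        return state

def high_level_policy(state, options):
    """Greedily pick the option expected to move the state closest to 0."""
    return min(options, key=lambda o: abs(state + o.action * o.horizon))

options = [Option("increase", +1, 3), Option("decrease", -1, 3)]
state = 5
for _ in range(2):  # two high-level decisions = six primitive steps
    chosen = high_level_policy(state, options)
    state = chosen.run(state)
print(state)  # 5 -> 2 -> -1
```

The real technique learns both levels from reward; the sketch only shows the decision structure that makes it hard to get right.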
-
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
It always wins in the end though. Look up the bitter lesson.
-
I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It's often implied (e.g. you'll be able to replace workers with this) or they stay vague on the timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.
Part of it is that we keep realizing AGI is a lot broader and more complex than we think
-
I remember listening to a podcast about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable, does his research, and talks to experts when the subject involves something he isn't an expert in himself.
There was this episode where he got into how technology only evolves with science (because you need to understand what you're doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct by other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper showing that language models could be parallelized, leading to the creation of "larger language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
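The parallelization point can be illustrated with a minimal sketch of scaled dot-product attention, the core mechanism from that paper (pure Python for clarity; the toy 2-token embeddings are made up). Each output row is computed independently from the same Q, K, V inputs, with no step-by-step recurrence, so all tokens can be processed at once:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def attention(Q, K, V):
    """Q, K, V: lists of equal-length vectors, one per token."""
    d = len(K[0])
    out = []
    for q in Q:  # every iteration is independent -> parallelizable
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Self-attention over two toy token embeddings: each token mixes in the
# other tokens' values, weighted by similarity.
X = [[1.0, 0.0], [0.0, 1.0]]
Y = attention(X, X, X)
```

In a recurrent model the loop over tokens would have to run in order; here it's just independent matrix rows, which is what made training on huge corpora feasible.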
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
There's been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.
-
Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain does), instead of making every single AI a personality-less sludge: jack of all trades, master of none.
Lots of people think this. They keep turning out to be wrong. Look up the bitter lesson.
-
I like my project manager, they find me work, ask how I'm doing and talk straight.
It's when the CEO/CTO/CFO speaks where my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself as it frantically tosses words and phrases into the meaning grinder and cranks the wheel, only for nothing to come out of it time and time again.
Find a better C-suite
-
Says the country where every science textbook is half science half conversion tables.
Not even close.
Yes, one half is conversion tables. The other half is scripture disproving Darwinism.
-
As an experienced software dev I'm convinced my software quality has improved by using AI. More time for thinking and less time for execution means I can make more iterations of the design and don't have to skip as many nice-to-haves or unit tests on account of limited time. It's not like I don't go through every code line multiple times anyway, I don't just blindly accept code. As a bonus I can ask the AI to review the code and produce documentation. By the time I'm done there's little left of what was originally generated.
If a bot can develop your software better than you then you're a shit software dev
-
I am indeed getting more time off for PD
We delivered a project 2 weeks ahead of schedule, so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid, on the clock.
Some companies are indeed pretty cool about it.
I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.
At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there's actual financial incentive for me to perform way better.
You're also ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time but spend all that freed-up time focusing on quality instead: polishing the product, documentation, double-checking my work, testing, etc.
Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn't miss anything.
And instead of mad rushing 8 hours straight, it's just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I'm literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.
Are you a software engineer? Without doxxing yourself, do you think you could share some more info or guidance? I've personally been trying to integrate AI code gen into my own work, but haven't had much success.
I've been able to ask ChatGPT to generate some simple but tedious code that would normally require me to read through a bunch of documentation. Usually, that's a third-party library or a part of the standard library I'm not familiar with. My work is mostly Python and C++, and I've found that ChatGPT is terrible at C++ and more often than not generates code that doesn't even compile. It is very good at generating Python by comparison, but unfortunately for me, that's only like 10% of my work.
For C++, I've found it helpful to ask misc questions about the design of the STL or new language features while I'm studying them myself. It's not actually generating any code, but it definitely saves me some time. It's very useful for translating C++'s "standardese" into English, for example. It still struggles to generate valid code using C++20 or newer, though.
I also tried a few local models on my GPU, but haven't had good results. I assume it's a problem with the models I used not being optimized for code, or maybe the inference tools I tried weren't using them right (oobabooga, kobold, and some others I don't remember). If you have any recommendations for good coding models I can run locally on a 4090, I'd love to hear them!
I tried using a few of those AI code editors (mostly VS Code plugins) years ago, and they really sucked. I'm sure things have improved since then, so maybe that's the way to go?
-
Why won't they pour billions into me? I'd actually put it to good use.
-
Why won't they pour billions into me? I'd actually put it to good use.
I'd be happy with a couple hundos.