agnos.is Forums

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End

Technology · 183 Posts · 101 Posters · 10 Views

• In reply to [email protected]:

  Yeah he should be using real art like stock photos and shitty clip art

  [email protected] · #101

  If his business can't afford to pay someone qualified to do the work, the business shouldn't exist.

• In reply to [email protected]:

  This post did not contain any content.

  [email protected] · #102

  It's ironic how conservative the spending actually is.

  Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

  No.

  Universities and such are putting out all this research, but the big model trainers holding the purse strings/GPUs are not using them. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient.

  DeepSeek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.

      • S [email protected]

        I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

        They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It's often implied (e.g. you'll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).

        With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.

        J This user is from outside of this forum
        J This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #103

        Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or realistic extrapolation of that, it is for the sort of product like OpenAI is promising equivalent to a full time research assistant for 20k a month. Which is way more expensive than an actual research assistant, but that's not stopping them from making the pitch.

        1 Reply Last reply
        0
        • N [email protected]

          The actual survey result:

          Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

          So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They arent betting this will turn to agi, they're betting that they have some application for the current ai. Are some of those applications dead ends, most definitely, are some of them revolutionary, maybe

          Thus would be like asking a researcher in the 90s that if they scaled up the bandwidth and computing power of the average internet user would we see a vastly connected media sharing network, they'd probably say no. It took more than a decade of software, cultural and societal development to discover the applications for the internet.

          P This user is from outside of this forum
          P This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #104

          Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.

          R 1 Reply Last reply
          0
          • S [email protected]

            But from a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.

            itslilith@lemmy.blahaj.zoneI This user is from outside of this forum
            itslilith@lemmy.blahaj.zoneI This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #105

            I have no strong preference either way. I think both are valid and sensible systems, and it's only confusing because of competing standards. I think over long enough time, due to the internet, the period as the decimal separator will prevail, but it's gonna happen normally, it's not something we can force. Many young people I know already use it that way here in Germany

            1 Reply Last reply
            0
            • N [email protected]

              The actual survey result:

              Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

              So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They arent betting this will turn to agi, they're betting that they have some application for the current ai. Are some of those applications dead ends, most definitely, are some of them revolutionary, maybe

              Thus would be like asking a researcher in the 90s that if they scaled up the bandwidth and computing power of the average internet user would we see a vastly connected media sharing network, they'd probably say no. It took more than a decade of software, cultural and societal development to discover the applications for the internet.

              1 This user is from outside of this forum
              1 This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #106

              I think most people agree, including the investors pouring billions into this.

              The same investors that poured (and are still pouring) billions into crypto, and invested in sub-prime loans and valued pets.com at $300M? I don't see any way the companies will be able to recoup the costs of their investment in "AI" datacenters (i.e. the $500B Stargate or $80B Microsoft; probably upwards of a trillion dollars globally invested in these data-centers).

              1 Reply Last reply
              0
              • N [email protected]

                The actual survey result:

                Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

                So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They arent betting this will turn to agi, they're betting that they have some application for the current ai. Are some of those applications dead ends, most definitely, are some of them revolutionary, maybe

                Thus would be like asking a researcher in the 90s that if they scaled up the bandwidth and computing power of the average internet user would we see a vastly connected media sharing network, they'd probably say no. It took more than a decade of software, cultural and societal development to discover the applications for the internet.

                F This user is from outside of this forum
                F This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #107

                The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.

                AI data centers also generate a ton of CO². For example, training an AI produces more CO² than a 55 year old human has produced since birth.

                Complete waste.

                1 Reply Last reply
                0
• In reply to [email protected]:

  I like my project manager; they find me work, ask how I'm doing, and talk straight.

  It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

  [email protected] · #108

  I just turn off my camera and turn on Forza Motorsport or something like that.

• In reply to [email protected]:

  I like my project manager; they find me work, ask how I'm doing, and talk straight.

  It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

  [email protected] · #109

  The number of times my CTO says we're going to do THING, only to have to be told that this isn't how things work...

• In reply to [email protected]:

  As an experienced software dev I'm convinced my software quality has improved by using AI.

  Then your software quality was extreme shit before. It's still shit, but an improvement. So, yay "AI", I guess?

  [email protected] · #110

  That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.

                      • P [email protected]

                        You are insulting a person, because they said ai helps them.

                        S This user is from outside of this forum
                        S This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #111

                        They really did that themselves.

                        P 1 Reply Last reply
                        0
                        • T [email protected]

                          That seems like just wishful thinking on your part, or maybe you haven't learned how to use these tools properly.

                          _cnt0@sh.itjust.works_ This user is from outside of this forum
                          _cnt0@sh.itjust.works_ This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #112

                          Na, the tools suck. I'm not using a rubber hammer to get woodscrews into concrete and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and it was >95% garbage. If "AI" makes someone a better coder it tells more about that someone than "AI".

                          T 1 Reply Last reply
                          0
                          • A [email protected]

                            I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs.
                            So far what I see is "developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week."

                            All that increased dev capacity needs to translate into new useful products. Right now the "new useful product" that all energies are poured into is... AI itself. Or even worse, shoehorning "AI-powered" features in all existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I'm pretty confident that something new and useful will remain but for now the level of hype is tremendous!

                            P This user is from outside of this forum
                            P This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #113

                            Good, fire 2 devs out of 3.

                            Companies that do this will fail.

                            Successful companies respond to this by hiring more developers.

                            Consider the taxi cab driver:

                            With the invention if the automobile, cab drivers could do their job way faster and way cheaper.

                            Did companies fire drivers in response? God no. They hired more

                            Why?

                            Because they became more affordable, less wealthy clients could now afford their services which means demand went way way up

                            If you can do your work for half the cost, usually demand goes up by way more than x2 because as you go down in wealth levels of target demographics, your pool of clients exponentially grows

                            If I go from "it costs me 100k to make you a website" to "it costs me 50k to make you a website" my pool of possible clients more than doubles

                            Which means... you need to hire more devs asap to start matching this newfound level of demand

                            If you fire devs when your demand is about to skyrocket, you fucked up bad lol

                            1 Reply Last reply
                            0
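  (Editorial aside: the "halve the price, more than double the clients" reasoning in the post above can be sketched with a toy model. It assumes client budgets follow a Pareto, i.e. power-law, distribution; the distribution, its parameters, and the addressable_clients helper are illustrative assumptions, not anything stated in the thread.)

  ```python
  # Toy model of the demand argument above: how many clients can afford a
  # website at a given price, assuming budgets follow a Pareto (power-law)
  # distribution. All parameters here are illustrative assumptions.

  def addressable_clients(price, total_clients=1_000_000,
                          budget_min=10_000, alpha=1.2):
      """Number of clients whose budget is at least `price`."""
      if price <= budget_min:
          return total_clients
      # Pareto survival function: P(budget >= price) = (budget_min / price) ** alpha
      return int(total_clients * (budget_min / price) ** alpha)

  for price in (100_000, 50_000):
      print(f"${price:,} website -> ~{addressable_clients(price):,} potential clients")

  # Halving the price multiplies the addressable pool by 2 ** alpha (~2.3x here),
  # which is the "demand grows faster than the discount" effect described above.
  ```

  Whether real client budgets are anywhere near Pareto-distributed is, of course, the load-bearing assumption.
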
• In reply to [email protected]:

  I think the human in the loop currently needs to know what the LLM produced or checked, but they'll get better.

  [email protected] · #114

  For sure, much like how a cab driver has to know how to drive a cab.

  AI is absolutely a "garbage in, garbage out" tool. Just having it doesn't automatically make you good at your job.

  The difference between someone who can wield it well and someone who has no idea what they are doing is palpable.

                              • P [email protected]

                                We are having massive exponential increases in output with all sorts of innovations, every few weeks another big step forward happens

                                https://youtu.be/d4tMzagjXrI

                                L This user is from outside of this forum
                                L This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #115

                                Around a year ago I bet a friend $100 we won't have AGI by 2029, and I'd do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

                                P 1 Reply Last reply
                                0
• In reply to [email protected]:

  Nah, the tools suck. I'm not using a rubber hammer to get wood screws into concrete, and I'm not using "AI" for something that requires a brain. I've looked at "AI" suggestions for coding and they were >95% garbage. If "AI" makes someone a better coder, it says more about that someone than about "AI".

  [email protected] · #116

  Then try writing the code yourself and ask ChatGPT's o3-mini-high to critique your code (be sure to explain the context).

  Or ask it to produce unit tests; even if they're not perfect from the get-go, I promise you will save time by having a starting skeleton.

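  (Editorial aside: as a concrete illustration of the "starting skeleton" workflow described above, here is the kind of pytest scaffold one might ask a model to draft and then correct by hand. The slugify function and its test cases are made up for this example, not taken from the thread.)

  ```python
  # Sketch of the workflow: you write the function, ask the model for unit
  # tests, then review and extend what comes back before trusting it.
  import re
  import pytest

  def slugify(title: str) -> str:
      """Lowercase a title and replace runs of non-alphanumerics with '-'."""
      return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

  # The kind of skeleton an LLM typically drafts -- a starting point, not a finish line.
  @pytest.mark.parametrize("title,expected", [
      ("Hello, World!", "hello-world"),
      ("  spaces   everywhere ", "spaces-everywhere"),
      ("already-a-slug", "already-a-slug"),
      ("", ""),
  ])
  def test_slugify(title, expected):
      assert slugify(title) == expected
  ```
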
                                  • L [email protected]

                                    Around a year ago I bet a friend $100 we won't have AGI by 2029, and I'd do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

                                    P This user is from outside of this forum
                                    P This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #117

                                    Humans are “trained” with maybe ten thousand “tokens” per day

                                    Uhhh... you may wanna rerun those numbers.

                                    It's waaaaaaaay more than that lol.

                                    and take only a couple dozen watts for even the most complex thinking

                                    Mate's literally got smoke coming out if his ears lol.

                                    A single Wh is 860 calories...

                                    I think you either have no idea wtf you are talking about, or your just made up a bunch of extremely wrong numbers to try and look smart.

                                    1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.

                                    2. An human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending in activity levels.

                                    3. While yes, an AI costs substantially more Wh, it also is done in weeks so it's obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it'd prolly require way WAY more than 13,000 Wh during the process for similiar reasons.

                                    4. Once trained, a single model can be duplicated infinitely. So it'd be more fair to compare how much millions of people cost to raise, compared to a single model to be trained. Because once trained, you can now make millions of copies of it...

                                    5. Operating costs are continuing to go down and down and down. Diffusion based text generation just made another huge leap forward, reporting around a twenty times efficiency increase over traditional gpt style LLMs. Improvements like this are coming out every month.

                                    1 Reply Last reply
                                    0
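  (Editorial aside: for readers trying to follow the unit juggling in this exchange, a quick conversion sketch. The ~2,000 kcal/day food intake and ~20 W brain power draw are common ballpark figures assumed here for illustration, not numbers taken from either commenter.)

  ```python
  # Quick unit conversions for the energy figures traded in this exchange.
  J_PER_WH = 3600.0      # 1 Wh = 3,600 J
  J_PER_CAL = 4.184      # 1 small calorie = 4.184 J; 1 kcal = 4,184 J

  print(f"1 Wh = {J_PER_WH / J_PER_CAL:.0f} cal (small calories)")            # ~860
  print(f"2,000 kcal/day = {2000 * 1000 * J_PER_CAL / J_PER_WH:.0f} Wh/day")  # ~2,324
  print(f"20 W brain running 24 h = {20 * 24} Wh/day")                        # 480
  ```

  The point is only that watt-hours, joules, and calories convert cleanly; readers can scale the daily figures over however many years they think is fair.
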
                                    • S [email protected]

                                      They really did that themselves.

                                      P This user is from outside of this forum
                                      P This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #118

                                      You hate ai or not, maybe you just found one more excuse to be an asshole online, don't know, don't care, bye.

                                      S 1 Reply Last reply
                                      0
• In reply to [email protected]:

  This post did not contain any content.

  Guest · #119

  I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.

  The fact that nothing got optimized, and it still didn't collapse, after DeepSeek kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.

• In reply to [email protected]:

  This post did not contain any content.

  [email protected] · #120

  The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
