agnos.is Forums

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End

  • L [email protected]

    Around a year ago I bet a friend $100 we won't have AGI by 2029, and I'd do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that's still dumber than the average human. In comparison humans are "trained" with maybe ten thousand "tokens" and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

    [email protected]
    #117

    Humans are “trained” with maybe ten thousand “tokens” per day

    Uhhh... you may wanna rerun those numbers.

    It's waaaaaaaay more than that lol.

    and take only a couple dozen watts for even the most complex thinking

    Mate's literally got smoke coming out of his ears lol.

    A single Wh is 860 calories...

    I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.

    1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school (there's a quick back-of-envelope sketch at the end of this post).

    2. A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending on activity levels.

    3. While yes, an AI costs substantially more Wh, it also is done in weeks so it's obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it'd prolly require way WAY more than 13,000 Wh during the process for similar reasons.

    4. Once trained, a single model can be duplicated infinitely. So it'd be more fair to compare how much millions of people cost to raise, compared to a single model to be trained. Because once trained, you can now make millions of copies of it...

    5. Operating costs are continuing to go down and down and down. Diffusion based text generation just made another huge leap forward, reporting around a twenty times efficiency increase over traditional gpt style LLMs. Improvements like this are coming out every month.
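
    For point 1, here's the kind of back-of-envelope estimate I mean, in quick Python. Every number in it is an assumption you can swap for your own; the point is just making the arithmetic explicit:

        # Rough estimate of the text "tokens" a person encounters per day.
        # All inputs are assumptions -- plug in your own numbers.
        WORDS_HEARD_PER_DAY = 50_000   # lectures, conversation, media (assumed)
        WORDS_READ_PER_DAY = 30_000    # textbooks, messages, web (assumed)
        TOKENS_PER_WORD = 1.3          # rough rule of thumb for BPE-style tokenizers

        tokens_per_day = (WORDS_HEARD_PER_DAY + WORDS_READ_PER_DAY) * TOKENS_PER_WORD
        tokens_per_decade = tokens_per_day * 365 * 10

        print(f"~{tokens_per_day:,.0f} tokens per day")        # ~104,000 with these assumptions
        print(f"~{tokens_per_decade:,.0f} tokens per decade")  # ~380 million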

    • S [email protected]

      They really did that themselves.

      [email protected]
      #118

      Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.

      • [email protected]
        This post did not contain any content.
        Guest
        #119

        I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.

        The fact nothing got optimized, and it still didn't collapse, after DeepSeek? Kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.

        • [email protected]
          This post did not contain any content.
          [email protected]
          #120

          The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.

          • P [email protected]

            Whether you hate AI or not, maybe you just found one more excuse to be an asshole online. Don't know, don't care, bye.

            [email protected]
            #121

            You seem to enjoy continuing to engage.

            • [email protected]

              No, 2,50€ is 2€ and 50ct, 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care for two digits after the comma), 2.500€ is 2500€

              [email protected]
              #122

              What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent/(whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.

              • L [email protected]

                You're confusing AI art with actual art, like that rendered from illustrations and paintings.

                [email protected]
                #123

                It's as much "real" art as photography: making a relatively finite number of decisions and finding something that looks "good".

                • D [email protected]

                  What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one pence/cent/(whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.

                  [email protected]
                  #124

                  Yes, that's true, but more of an edge case. Something like gasoline is commonly priced in fractional cents, tho.
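
                  For the live-bill case above, the usual trick is to keep the running amount at higher precision and only round to two decimals on the final invoice. A rough sketch of what I mean in Python (the per-GB price and the usage are made-up numbers):

                      from decimal import Decimal, ROUND_HALF_UP

                      PRICE_PER_GB = Decimal("0.01")     # one cent per gigabyte (assumed tariff)
                      usage_gb = Decimal("0.250")        # 250 MB used so far this month (made up)

                      running = PRICE_PER_GB * usage_gb  # keep full precision while the bill is "live"
                      invoice = running.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

                      def eu(amount, places):
                          # Simplistic comma-as-decimal-separator display; real code
                          # would use a proper locale-aware library.
                          return f"{amount:.{places}f} €".replace(".", ",")

                      print("running:", eu(running, 4))  # 0,0025 € -- sub-cent, but it exists
                      print("invoice:", eu(invoice, 2))  # 0,00 €   -- what actually gets charged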

                  • P [email protected]

                    Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.

                    [email protected]
                    #125

                    No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI; problem is, as it stands it's a really difficult technique to use, so it isn't used often. And LLMs have sucked all the research dollars out of any other ideas.
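
                    To give a flavour of what I mean: in hierarchical RL the high-level policy only picks among reusable low-level "option" policies, and each option runs for several steps before control returns to the top. Just a toy sketch of that control structure (hand-coded, no learning, everything in it is made up):

                        # Toy 1-D world: walk from position 0 to the goal at 10.
                        GOAL = 10

                        # Low-level "options": small reusable sub-policies (foundational behaviours).
                        def step_right(pos): return pos + 1
                        def step_left(pos):  return max(0, pos - 1)
                        OPTIONS = {"right": step_right, "left": step_left}

                        def high_level_policy(pos):
                            # The high-level policy chooses WHICH option to run, not individual
                            # actions. Here it is a trivial hand-coded rule standing in for a
                            # learned policy.
                            return "right" if pos < GOAL else "left"

                        def run_option(name, pos, horizon=3):
                            # An option executes several primitive steps before handing
                            # control back to the high-level policy.
                            for _ in range(horizon):
                                pos = OPTIONS[name](pos)
                                if pos == GOAL:
                                    break
                            return pos

                        pos = 0
                        while pos != GOAL:
                            chosen = high_level_policy(pos)
                            pos = run_option(chosen, pos)
                            print(f"ran option '{chosen}', now at {pos}")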

                    • [email protected]

                      Optimizing AI performance by “scaling” is lazy and wasteful.

                      Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.

                      [email protected]
                      #126

                      It always wins in the end though. Look up the bitter lesson.

                      • S [email protected]

                        I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

                        They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It's often implied (e.g. you'll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).

                        With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.

                        [email protected]
                        #127

                        Part of it is we keep realizing AGI is a lot broader and more complex than we think.

                        • B [email protected]

                          I remember listening to a podcast that's about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff, and he does his research and talks to experts when the subject involves something he isn't himself an expert in.

                          There was this episode where he kinda got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing and you need a theory of how it works before you make new assumptions and test those assumptions). He gave the example of the Apple visionPro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct by other applications.

                          So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

                          Even the current wave of LLMs is simply a product of the Google paper that showed we could parallelize language models (there's a tiny sketch of the core idea at the end of this post), leading to the creation of "larger language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.

                          In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
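
                          If anyone wants to see why that paper made training so easy to parallelize, here's the core operation (scaled dot-product attention) in a few lines of numpy -- toy sizes and random numbers, nothing real:

                              import numpy as np

                              rng = np.random.default_rng(0)
                              seq_len, d = 4, 8                  # toy sequence length and embedding size
                              Q = rng.normal(size=(seq_len, d))  # queries
                              K = rng.normal(size=(seq_len, d))  # keys
                              V = rng.normal(size=(seq_len, d))  # values

                              # Every position attends to every other position in one batch of
                              # matrix multiplies -- no step-by-step recurrence, which is what
                              # makes training parallelize so well.
                              scores = Q @ K.T / np.sqrt(d)
                              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                              weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
                              output = weights @ V

                              print(output.shape)  # (4, 8): one mixed vector per position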

                          [email protected]
                          #128

                          There have been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.

                          • N [email protected]

                            Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge of jack of all trades, master of none.
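
                            Structurally, something like this toy sketch of the routing idea (all the "specialists" here are just stand-in functions I made up):

                                # Toy version of "many specialised AIs cooperating": a router looks at
                                # the request and hands it to the right specialist. The specialists are
                                # placeholder functions, only there to show the shape of the idea.

                                def math_specialist(task):    return f"[math] worked answer for: {task}"
                                def code_specialist(task):    return f"[code] patch for: {task}"
                                def writing_specialist(task): return f"[writing] draft for: {task}"

                                SPECIALISTS = {
                                    "math": math_specialist,
                                    "code": code_specialist,
                                    "writing": writing_specialist,
                                }

                                def route(task):
                                    # A real system would use a learned classifier here; this keyword
                                    # check is a stand-in so the example runs on its own.
                                    if any(w in task for w in ("equation", "integral", "sum")):
                                        return "math"
                                    if any(w in task for w in ("bug", "function", "compile")):
                                        return "code"
                                    return "writing"

                                for task in ["solve this equation", "fix this bug", "summarise this article"]:
                                    print(SPECIALISTS[route(task)](task))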

                            [email protected]
                            #129

                            Lots of people think this. They keep turning out to be wrong. Look up the bitter lesson.

                            • [email protected]

                              I like my project manager, they find me work, ask how I'm doing and talk straight.

                              It's when the CEO/CTO/CFO speaks where my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself as it frantically tosses words and phrases into the meaning grinder and cranks the wheel, only for nothing to come out of it time and time again.

                              [email protected]
                              #130

                              Find a better C-suite

                              • [email protected]
                                This post did not contain any content.
                                Guest
                                #131

                                Good, let them go broke in the pursuit of a dead end.

                                • H [email protected]

                                  Says the country where every science textbook is half science half conversion tables.

                                  [email protected]
                                  #132

                                  Not even close.

                                  Yes, one half is conversion tables. The other half is scripture disproving Darwinism.

                                  • T [email protected]

                                    As an experienced software dev I'm convinced my software quality has improved by using AI. More time for thinking and less time for execution means I can make more iterations of the design and don't have to skip as many nice-to-haves or unit tests on account of limited time. It's not like I don't go through every code line multiple times anyway, I don't just blindly accept code. As a bonus I can ask the AI to review the code and produce documentation. By the time I'm done there's little left of what was originally generated.

                                    [email protected]
                                    #133

                                    If a bot can develop your software better than you then you're a shit software dev

                                    • P [email protected]

                                      I am indeed getting more time off for PD

                                      We delivered on a project 2 weeks ahead of schedule so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid on the clock.

                                      Some companies are indeed pretty cool about it.

                                      I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.

                                      At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there's actual financial incentive for me to perform way better.

                                      You also are ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time, but spend all that freed up time instead focusing on quality, polishing the product up, documentation, double checking my work, testing, etc.

                                      Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn't miss anything.

                                      And instead of mad rushing 8 hours straight, it's just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I'm literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.

                                      [email protected]
                                      #134

                                      Are you a software engineer? Without doxxing yourself, do you think you could share some more info or guidance? I've personally been trying to integrate AI code gen into my own work, but haven't had much success.

                                      I've been able to ask ChatGPT to generate some simple but tedious code that would normally require me read through a bunch of documentation. Usually, that's a third party library or a part of the standard library I'm not familiar with. My work is mostly Python and C++, and I've found that ChatGPT is terrible at C++ and more often than not generates code that doesn't even compile. It is very good at generating Python by comparison, but unfortunately for me, that's only like 10% of my work.

                                      For C++, I've found it helpful to ask misc questions about the design of the STL or new language features while I'm studying them myself. It's not actually generating any code, but it definitely saves me some time. It's very useful for translating C++'s "standardese" into English, for example. It still struggles to generate valid code using C++20 or newer though.

                                      I also tried a few local models on my GPU, but haven't had good results. I assume it's a problem with the models I used not being optimized for code, or maybe the inference tools I tried weren't using them right (oobabooga, kobold, and some others I don't remember). If you have any recommendations for good coding models I can run locally on a 4090, I'd love to hear them!

                                      I tried using a few of those AI code editors (mostly VS Code plugins) years ago, and they really sucked. I'm sure things have improved since then, so maybe that's the way to go?

                                      • [email protected]
                                        This post did not contain any content.
                                        [email protected]
                                        #135

                                        Why won't they pour billions into me? I'd actually put it to good use.

                                        • [email protected]

                                          Why won't they pour billions into me? I'd actually put it to good use.

                                          [email protected]
                                          #136

                                          I'd be happy with a couple hundos.
