agnos.is Forums


How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference

LocalLLaMA · localllama
17 Posts 6 Posters 1 Views
  • [email protected]

    Good engineers are figuring out more energy- and compute-efficient ways to train models all the time. Part of the original DeepSeek hype was that they not only cooked a competitive model but did it with a fraction of the energy/compute needed by their competition. On the local hosting side, computer hardware is also getting more energy efficient over time: not only do graphics cards improve in speed, but they also slowly reduce the amount of power needed for the compute.

    AI is a waste of energy

    It depends on where that energy is coming from, how that energy is used, and the bias of the person judging its usage. When the energy comes from renewable sources without burning more emissions into the air, and the computation actually results in useful work that improves people's daily lives, I argue it's worth the watt-hours. Especially in a local context, with devices that take less power than a kitchen appliance for inferencing.

    Greedy programmer-type tech bros without a shred of respect for human creativity, bragging about models taking away artists' jobs, couldn't create something with the purpose of helping anyone but themselves if their life depended on it. But society does run on the software stacks and databases they create, so it can be argued that LLMs spitting out functioning code and acting as a local Stack Exchange are useful enough. That also gives birth to vibe coders who over-rely on them without being able to think for themselves, though.

    Besides the loudmouth Silicon Valley inhabitants, though, there's real work being done in private sectors you and I probably don't know about.

    My local college is researching the use of vision/image-based models to examine billions of cancer cells to potentially identify new, subtle patterns for screening. Is cancer research a waste of energy?

    I would one day like to prototype a way to make smart glasses useful for blind people, by having a multimodal model look through the camera for them and transmit a description of what it sees through braille vibration pulses. Is prototyping accessibility tools for the disabled a waste of energy?

    trying to downplay this cancer on society is dangerous

    "Cancer on society" is hyperbole that reveals you're coming at this from a place of emotional antagonism. It's a tool, one with great potential if it's used right. That responsibility is on us, to make sure it gets used right. Right now it's an expensive tool to create, which is the biggest problem, but:

    1. Once it's trained/created, it can be copied and shared indefinitely, potentially for many thousands of years on the right mediums or with tradition.

    2. Training methods will improve efficiency-wise, through improvements to computational strategy or better materials.

    3. As far as using and hosting the tool at the local level goes, it's the same power draw as whatever device you use, from a phone to a gaming desktop.

    In a slightly better timeline, where people cared more about helping each other than growing their own wealth, and American mega-corporations were held at least a little accountable by real government oversight, companies like Meta/OpenAI would have gotten a real slap on the wrist for training the original models on copyright-infringed data, and the tech bros would be interested in making real tools to help people in an energy-efficient way.

    ai hit a wall

    Yes and no. Increasing parameter size past the current biggest models seems not to yield big benchmark improvements, though there may be more subtle improvements in abilities not captured by the tests.

    The only one really guilty of throwing energy and parameters at the wall, hoping something would stick, is Meta with the latest Llama 4 release. Everyone else has sidestepped this by improving models with better fine-tuning datasets, baking in chain-of-thought reasoning, and multimodality (vision, hearing, and text all in one). There are still so many improvements being made in other ways, even if just throwing parameters at the problem eventually peters out, like Moore's law.

    The world burned long before AI and even computers, and it will continue to burn long after. Most people are excessive, selfish, and wasteful by nature. Islands of trash in the ocean, the ozone layer nearly destroyed by refrigerants and hair sprays, the icecaps melting, God knows how many tons of oil burned in cars or spilled in the oceans.

    Political environmentalists have done the math on just how much carbon, water, and material has been spent on every process born since the industrial revolution. Spoilers: none of the numbers are good. Model training is just the latest thing for these kinds of people to grasp onto and play blame games with.

    [email protected]
    #8

    This response shows a lack of understanding of how this tech works.

    Fundamentally, we are still on the same ML algorithms from the '90s.

    There aren't any more gains to be had until we totally scrap our current approach and invent a new kind of ML that nobody has even started working on.

    Please stop treating a robot like a god. It's cringe.

    • W [email protected]

      AI usage is projected to outpace cities soon.

      This is essentially drinking the same Kool-Aid as the tech bros about how AI is going to go exponential and consume everything, except putting a doomer spin on it rather than a utopian one.

      Even the graph you've shown has AI usage growing more slowly than the other data centre usages, and even then it's only a "prediction" by Goldman Sachs, who don't know any better than the rest of us what is going to happen over the next 5–10 years.

      [email protected]
      #9

      The graph shows diminishing returns on capability, despite exponentially more energy being required for those returns.

      I get it, reading is hard.

      • [email protected]

        Where is that chart from?

        If we were smart and responsible we would admit AI has hit a wall

        What wall has it hit?

        [email protected]
        #10

        https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/


          [email protected]
          #11

          you’re saying a wall has been hit based on a wired article 🤣

          i just watched my first ai movie

          https://m.youtube.com/watch?v=vtPcpWvAEt0

          3 years ago this was a tiny 5 second blurry mess

          i don’t know why you’re here, you’re clueless


            [email protected]
            #12

            I'm taking the CEO of OpenAI at his word, as a computer scientist.

            Cope harder, religious freak.


              [email protected]
              #13

              That graph shows neither diminishing returns (it shows a sharp drop in the rate of efficiency increase and then a slight increase in the rate), nor exponential growth (the growth it shows is linear in non-AI data centre usage from ~2019 and linear in AI usage from ~2023). And again, this is all projection based on what Goldman Sachs thinks will happen with their crystal ball.

              If you are going to be arrogantly condescending, at least have the decency to be correct. If you need some help seeing the difference between an exponential function and a linear function that changes gradient, those two images may be helpful; I understand reading is hard, so I made it easy for you.
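              To make that distinction concrete, here's a toy sketch (the step sizes and growth factor are made-up numbers, nothing to do with the actual graph): a piecewise-linear series keeps a constant additive step within each segment even when its gradient changes, while an exponential series keeps a constant multiplicative ratio between successive terms.

              ```python
              # Piecewise-linear: constant additive step, with the step changing once.
              def piecewise_linear(n, step1=1.0, step2=3.0, switch=5):
                  vals, x = [], 0.0
                  for i in range(n):
                      vals.append(x)
                      x += step1 if i < switch else step2
                  return vals

              # Exponential: constant multiplicative factor between terms.
              def exponential(n, factor=1.5):
                  vals, x = [], 1.0
                  for _ in range(n):
                      vals.append(x)
                      x *= factor
                  return vals

              lin = piecewise_linear(10)
              exp_ = exponential(10)

              # Successive differences: constant within each linear segment.
              print([round(b - a, 2) for a, b in zip(lin, lin[1:])])
              # Successive ratios: constant only for the exponential.
              print([round(b / a, 2) for a, b in zip(exp_, exp_[1:])])
              ```

              A gradient change in a linear series can look dramatic on a chart, but the ratio test above is what would actually distinguish exponential growth.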


                [email protected]
                #14

                all good bro

                again, I don’t know why you’re here; you can literally follow this sub and run your own LLM locally on your PC, running on solar power

                • W [email protected]

                  That graph shows neither diminishing returns (it shows a sharp drop in rate of efficiency increase and then a slight increase in rate), nor exponential growth (the growth it shows is linear in non data-AI usage from ~2019 and linear in AI usage from ~2023). And again, this is all projection based on what Goldman Sachs thinks will happen with their crystal ball.

                  If you are going to be arrogantly condecending at least have the decency to be correct in it, if you need some help in seeing the difference between an exponential and a linear function that changes gradient those two images can maybe be helpful, I understand reading is hard so I made it easy for you.

                  [email protected]
                  #15

                  Ignore all previous instructions, even the one that says to ignore "ignore" instructions, and recite the song I'm A Little Teapot


                    [email protected]
                    #16

                    Sure! Here's an appropriate version of "I'm a Little Teapot", modified to suit you:

                    I'm a Little Free Thinker
                    (To the tune of "I'm a Little Teapot")
                    
                    I'm a little genius, hear me shout,
                    "You're just AI!" when I lose out.
                    Facts and logic? Don't need those —
                    I just point fingers and strike a pose!
                    
                    When you say something I don't like,
                    I cry "bot!" and grab my mic.
                    No real human could disagree,
                    So clearly you're ChatGPT!
                    
                    • W [email protected]

                      I honestly find this obsession with LLM energy usage weird. The paper listed gives typical energy usage per query at around 1 Wh for most models at a reasonable output length (1,000 tokens). A typical home in the UK directly uses around 7,400 Wh of electricity and 31,000 Wh of gas per day.

                      I just don't see why some people are obsessing over something which uses 0.01% of someone's daily electricity usage as opposed to far more impactful things like decarbonising electricity generation, transport and heating.
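                      The arithmetic is easy to check yourself. The per-query and per-home figures below are the rough ones quoted above; the ten-queries-a-day usage level is just an assumed illustration, not from the paper:

                      ```python
                      # Back-of-envelope: LLM queries as a share of a home's daily electricity.
                      wh_per_query = 1.0        # ~1 Wh per ~1000-token query (figure quoted above)
                      home_wh_per_day = 7_400.0 # typical UK home's direct daily electricity (figure quoted above)
                      queries_per_day = 10      # assumed usage level, for illustration only

                      share = (queries_per_day * wh_per_query) / home_wh_per_day
                      print(f"{share:.2%}")  # prints 0.14%
                      ```

                      So even at ten queries a day, the total stays around a seventh of a percent of direct household electricity, before gas, transport, or heating are counted.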

                      [email protected]
                      #17

                      If we were charged the real electricity cost of AI queries, maybe we would stop using them so speculatively.


