agnos.is Forums

Lemmy be like

  • N [email protected]

    AI is good and cheap now because businesses are funding it at a loss, so I'm not sure what you mean here.

    The problem is that it's cheap, so anyone can make whatever they want, and most people make low-quality slop, hence why it's not "good" in your eyes.

    Making a cheap or efficient AI doesn't help the end user in any way.

    [email protected]
    #261

    I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.

    What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer, and and other predictive tasks that we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

    The work of training new, useful AI models is going to be done by scientists and researchers, probably on a limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of $20,000 GPUs like are being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will be more affordable and hopefully actually get used for useful AI projects.

  • B [email protected]

      Do you really need to have a list of why people are sick of LLM and AI slop?

      AI is literally making people dumber:

      https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

      https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

      They are a massive privacy risk:

      https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s

      https://theconversation.com/ai-tools-collect-and-store-data-about-you-from-all-your-devices-heres-how-to-be-aware-of-what-youre-revealing-251693

      They are being used to push fascist ideologies into every aspect of the internet:

      https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/

      And they are a massive environmental disaster:

      https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

      https://www.forbes.com/sites/cindygordon/2024/02/25/ai-is-accelerating-the-loss-of-our-scarcest-natural-resource-water/

      Stop being a corporate apologist and stop wrecking the environment with this shit technology.

      Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.

      [email protected]
      #262

      https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

      • E [email protected]
        This post did not contain any content.
        [email protected]
        #263

        I find it very funny how the mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."

        • D [email protected]

          Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.

          [email protected]
          #264

          Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.

          • J [email protected]

            Texas has just asked residents to take fewer showers while datacenters built specifically for LLM training continue operating.

            This is more like feeling bad for not using a paper straw while the local factory dumps all its used oil into the community river.

            [email protected]
            #265

            Maybe they should cut down on beef first; it uses far more water and produces far more CO2 than AI.

            • 1 kg beef = 60 kg CO2 - source
            • 1000 km return flight = 314 kg CO2 - source
            • 1 Bitcoin transaction = 645 kg CO2 - source
            • 1000 AI prompts = 3 kg CO2 - source
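
            For scale, here is a rough sketch of the same comparison in code, using only the figures quoted above (nothing else is assumed):

            ```python
            # Rough CO2 comparison based on the figures listed above (kg CO2 per item).
            footprints = {
                "1 kg beef": 60,
                "1000 km return flight": 314,
                "1 Bitcoin transaction": 645,
                "1000 AI prompts": 3,
            }

            kg_per_prompt = footprints["1000 AI prompts"] / 1000  # kg CO2 per single prompt

            for item, kg in footprints.items():
                print(f"{item}: {kg} kg CO2 (~{kg / kg_per_prompt:,.0f} prompts' worth)")
            ```
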
            • R [email protected]

              I asked for an example that makes up for the downside everyone has to pay.

              So, no! A better shutter puller or a marginally better voice assistant is not gonna cut it.
              And again, that's stuff Siri and home-automation tools were able to do since 2014 at a minimum.

              [email protected]
              #266

              Siri has privacy issues, and it only works when connected to the internet.

              What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.
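
              As a rough sketch of what running a local LLM can look like (assuming the llama-cpp-python bindings and a downloaded GGUF model file; the filename below is just a placeholder), everything here runs offline once the weights are on disk:

              ```python
              # Minimal local inference with llama-cpp-python (pip install llama-cpp-python).
              # Once the model file is on disk, no network access is needed.
              from llama_cpp import Llama

              # Placeholder path: any GGUF model you have downloaded locally.
              llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

              out = llm(
                  "Q: Why might someone prefer a local model over a cloud API?\nA:",
                  max_tokens=128,
                  stop=["Q:"],
              )
              print(out["choices"][0]["text"].strip())
              ```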

              • C [email protected]

                I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.

                [email protected]
                #267

                Yes! Will people stop with their sloppy criticisms?

                • E [email protected]
                  This post did not contain any content.
                  [email protected]
                  #268

                  • B [email protected]

                    [email protected]
                    #269

                    AI is literally making people dumber:
                    https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

                    We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.

                    I would not say "can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving" is the same as "literally making people dumber". A sample size of 319 isn't really representative anyway, and they mainly sampled one specific type of person. People switch from searching to verifying, which doesn't sound too bad if done correctly. They associate critical thinking with verifying everything ("Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort"); I'm not sure I agree with that.

                    This study is also aimed only at people using GenAI for work rather than everyday use. I personally discovered so many things with GenAI, and I know to always question what the model says on specific topics or questions, because models tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.

                    https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

                    They had to write an essay in 20 minutes... obviously most people would just generate the whole thing and fix little problems here and there, but if you have to think less because you're just fixing stuff instead of inventing it... well, yeah, you use your brain less. That doesn't make you dumb. It's a bit like saying paying by card makes you dumber than paying in cash, because with cash you have to count how much to hand over and how much you should get back.

                    Yes, if you get helped by a tool or by someone, it will be less demanding for your brain. Who would have thought?!

                    • B [email protected]

                      Rockstar Games: 6k employees.
                      20 kWh per square foot: https://esource.bizenergyadvisor.com/article/large-offices
                      150 square feet per employee: https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/#%3A~%3Atext=The+needed+workspace+may+vary+in+accordance

                      18,000,000,000 watt hours

                      vs

                      10,000,000,000 watt hours for ChatGPT training

                      https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/

                      Yet there's no hand-wringing over the environmental destruction caused by 3D gaming.

                      [email protected]
                      #270

                      Semi-non-sequitur argument aside, your math seems to be off.

                      I double-checked my quick phone calculations, and using the figures provided, Rockstar Games' office-space energy use comes out to roughly 18,000,000 kWh (18 million), not 18,000,000,000 (18 billion).
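
                      For reference, a quick sketch of the arithmetic from the quoted figures, taken at face value (note the conversion between kWh and plain watt-hours):

                      ```python
                      # Office energy estimate using the figures quoted above.
                      employees = 6_000
                      sqft_per_employee = 150
                      kwh_per_sqft = 20  # kWh per square foot (bizenergyadvisor figure quoted above)

                      total_kwh = employees * sqft_per_employee * kwh_per_sqft
                      print(f"{total_kwh:,} kWh")         # 18,000,000 kWh
                      print(f"{total_kwh * 1_000:,} Wh")  # 18,000,000,000 watt-hours
                      ```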

                      • B [email protected]

                        [email protected]
                        #271

                        They are a massive privacy risk:

                        I do agree on this, but at this point everyone uses Instagram, Snapchat, Discord and whatever to share their DMs, which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.

                        • B [email protected]

                          [email protected]
                          #272

                          They are being used to push fascist ideologies into every aspect of the internet:

                          Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue about controversial subjects, which prevents you from learning anything. Censorship sucks.

                          • K [email protected]

                            Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.

                            [email protected]
                            #273

                            You're still lumping all AI together; what about hating just LLMs and image generators?

                            • M [email protected]

                              I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.

                              What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer, and and other predictive tasks that we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

                              The work of training new, useful AI models is going to be done by scientists and researchers, probably on a limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of $20,000 GPUs like are being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will be more affordable and hopefully actually get used for useful AI projects.

                              [email protected]
                              #274

                              Ok. Thanks for clarifying.

                              Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to stick AI into everything and use it in every which way.

                              The thing about the AI boom is that lots of money is being invested in all fields. A bubble pop would result in investment money drying up everywhere, not in making access to AI more affordable as you are suggesting.

                            • A [email protected]

                                https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

                                [email protected]
                                #275

                                Not clicking on a substack link. Fucking Nazi-promoting shit website.

                                • E [email protected]

                                  I never said that.

                                  All I'm saying is just because The Internet caused library use to plummet doesn't mean Internet = Bad.

                                  [email protected]
                                  #276

                                  It might. Like, maybe a little?

                                  Oddly, you're the one kind of lacking nuance here. I'd be willing to oppose the Internet in certain contexts. It certainly feels less and less useful as it's consumed by AI spam anyway.

                                  • K [email protected]

                                    “Guns don’t kill people, people kill people”

                                    Edit:

                                    Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)

                                    We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.

                                    ...

                                    The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.

                                    ...

                                    Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.

                                    Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.

                                    Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.

                                    ...

                                    Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.

                                    If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!

                                    And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

                                    A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.

                                    You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!

                                    So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.

                                    I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.

                                    Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.

                                    But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.

                                    [email protected]
                                    #277

                                    We once played this game with friends where you get a word stuck on your forehead and you have to guess what you are.

                                    One guy got C4 (as in the explosive) to guess, and he failed. I remember we had to agree among ourselves whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used for actual killing, as opposed to other uses like mining and such. A parallel question was: is a knife a weapon?

                                    But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, unlike guns, which are only for killing or injuring.

                                    Take guns away and people will kill with literally anything else. But give easy access to guns and people will kill with them. A gun is not a tool; it is a weapon by design.

                                    • M [email protected]

                                      I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.

                                      [email protected]
                                      #278

                                      I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, it wouldn't somehow stop or end AI; it would cause a new wave of innovation instead.

                                      I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.

                                      • E [email protected]

                                        Siri has privacy issues, and it only works when connected to the internet.

                                        What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.

                                        [email protected]
                                        #279

                                        Voice recognition is not limited to Siri; I just used the most well-known example. Local assistants existed long before LLMs and didn't require this many resources.
                                        You are once again moving the goalposts. Find one real-world use that offsets the downsides.

                                        • N [email protected]

                                          [email protected]
                                          #280

                                          Much love
