agnos.is Forums

Lemmy be like

Lemmy Shitpost · 419 Posts · 150 Posters
  • M [email protected]

    I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.

    What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer, and and other predictive tasks that we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

    The work of training new, useful AI models is going to be done by scientists and researchers, probably on a limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of $20,000 GPUs like are being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will be more affordable and hopefully actually get used for useful AI projects.

[email protected] wrote (#274):

Ok. Thanks for clarifying.

Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to cram AI into every possible use.

The thing about the AI boom is that lots of money is being invested across all fields. A bubble popping would result in investment money drying up everywhere, not in AI access becoming more affordable, as you are suggesting.

[email protected] wrote:

https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

[email protected] wrote (#275):

Not clicking on a substack link. Fucking Nazi-promoting shit website.

      • E [email protected]

        I never said that.

        All I'm saying is just because The Internet caused library use to plummet doesn't mean Internet = Bad.

[email protected] wrote (#276):

        It might. Like, maybe a little?

        Oddly, you're the one kind of lacking nuance here. I'd be willing to oppose the Internet in certain contexts. It certainly feels less and less useful as it's consumed by AI spam anyway.

        • K [email protected]

          “Guns don’t kill people, people kill people”

          Edit:

          Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)

          We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.

          ...

The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.

          ...

          Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.

          Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.

          Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.

          ...

          Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.

          If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!

          And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

          A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.

          You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!

          So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.

          I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.

          Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.

          But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.

[email protected] wrote (#277):

We once played a game with friends where you get a word stuck on your forehead and have to guess what you are.

One guy got C4 (as in the explosive) and failed to guess it. I remember we had to agree among ourselves whether C4 is or isn't a weapon. The main argument was that explosives are comparatively rarely used for actual killing, as opposed to uses like mining. A parallel question was: is a knife a weapon?

But ultimately we agreed that C4 is not a weapon. It wasn't invented primarily to kill or injure, as opposed to guns, which exist only to kill or injure.

Take guns away, and people will kill with literally anything else. But give people easy access to guns, and they will kill with them. A gun is not a tool; it is a weapon by design.

          • M [email protected]

            I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential, by throwing more and more hardware and money at LLMs and generative AI models because they don't understand the technology and see it as a way to get rich and powerful quickly.

[email protected] wrote (#278):

I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, that wouldn't somehow stop or end AI; it would cause a new wave of innovation instead.

I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.

            • E [email protected]

              Siri has privacy issues, and only works when connected to the internet.

              What are the downsides of me running my own local LLM? I've named many benefits privacy being one of them.

[email protected] wrote (#279):

Voice recognition is not limited to Siri; I just used the best-known example. Local assistants existed long before LLMs and didn't require this many resources.
You are once again moving the goalposts. Find one real-world use that offsets the downsides.

              • N [email protected]

                V This user is from outside of this forum
                V This user is from outside of this forum
                [email protected]
                wrote last edited by
                #280

                Much love

[email protected] wrote:

                  I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."

[email protected] wrote (#281):

One of these topics is about class consciousness; the other is about human rights.

                  An AI is not a person.

                  Someone with they/them pronouns is a person.

                  They have no business being compared to one another!

                  • V [email protected]

                    One of these topics is about class consciousness, those other is about human rights.

                    An AI is not a person.

                    Someone with they/them pronouns is a person.

                    They have no business being compared to one another!

[email protected] wrote (#282):

It's a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I, you act the same as a conservative person seeing the word "pronouns."

[email protected] wrote:

It's a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I, you act the same as a conservative person seeing the word "pronouns."

[email protected] wrote (#283):

Well, if baseless bitching can keep homophobia alive and well, then it's clear the strategy works.

                      It is always better to see and to write a sound argument, but barring that, perpetuating negativity is pretty effective, esp. on the internet.

                      I see what you’re getting at, though!

[email protected] wrote:

Voice recognition is not limited to Siri; I just used the best-known example. Local assistants existed long before LLMs and didn't require this many resources.
You are once again moving the goalposts. Find one real-world use that offsets the downsides.

[email protected] wrote (#284):

I've already mentioned drafting documents and translating documents.

                        • T [email protected]

                          So everything related to AI is negative ?

                          If so do you understand why we can't have any conversation on the subject ?

[email protected] wrote (#285):

                          Did I say that?
                          Show me the place where I said that. Show it to me.
                          Come on. Show me the place where I said everything related to AI is negative. Show me even a place where you could reasonably construe that's what I meant.

                          If you're talking about why we can't have a conversation, take a long hard look in the fucking mirror you goddamn hypocrite.

                          • V [email protected]

                            One of these topics is about class consciousness, those other is about human rights.

                            An AI is not a person.

                            Someone with they/them pronouns is a person.

                            They have no business being compared to one another!

[email protected] wrote (#286):

                            Calling AI not a person is going to be a slur in the future, you insensitive meatbag

                            • I [email protected]

                              I didn't like that movie back then, I thought it was too on the nose and weird.

                              But wow, this has aged like fine wine, that clip was amazing

                              When are we going to have actual violence against androids

[email protected] wrote (#287):

Yes. When I first saw it, I thought it was soppy, depressing and weird. Now I'm just wowed by the accurate portrayal of human nature.

When someone says that plants are people, they will be respected as spiritual or written off as a weirdo. Saying that animals are people makes for some really contentious debates. But saying that people are people is something wars are fought over. We'll get there once androids are enough like us.

                              • E [email protected]

                                The Internet kind of was turned lose on an unsuspecting public. Social media has and still is causing a lot of harm.

                                Did you really compare every household having a nuclear reactor with people having access to AI?

                                How's is that even remotely a fair comparison.

                                To me the Internet being released on people and AI being released on people is more of a fair comparison.

                                Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.

[email protected] wrote (#288):

You know that the public got trickle-fed the internet for decades before it was ubiquitous in every home, and then another decade before it was ubiquitous in everyone's pocket. People had literal decades to learn how to protect themselves and for the job market to adjust. During that time, there was lots of research and information on how to protect yourself, and although regulation mostly failed to do anything, the learning material was adapted for all ages and was promoted.

Meanwhile, LLMs are at least as impactful as the internet, and were released to the public almost without notice. Research on their effects is only being done now that it's already too late, and the public doesn't have any tools to protect itself. What meager material on appropriate use exists hasn't been well researched or adapted to all ages, when it isn't being presented as "the insane thoughts of doomer Luddites, not to be taken seriously" by AI supporters.

The point is that people are being handed this catastrophically dangerous tool without any training, or even research into what the training should be. And we expect everything to be fine just because the tool is easy to use and convenient?

These companies are being allowed to bulldoze not just the economy but the mental resilience of entire generations, for the sake of a bit of shareholder profit.

                                • K [email protected]

                                  It's funny watching you AI bros climb over each other to be the first with a what about-ism.

[email protected] wrote (#289):

Providing a counterexample to a claim is not whataboutism.

Whataboutism involves derailing a conversation with an ad hominem to avoid addressing someone's argument, like what you just did.

                                  • V [email protected]

                                    Much love

[email protected] wrote (#290):

                                    Veri smol

[email protected] wrote:

Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?

I come from a technical background and have worked in AI to help people and small businesses, whether for farming, business decisions, or other needs. I can't agree with the view that AI is inherently bad; it's a valuable tool for many. What's causing confusion is that 'AI' is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language use to avoid misunderstandings. People often misuse words in ways that stray far from their original etymology. For example, in Indonesia, we use the word 'literally' as it's meant: in a literal sense, not figuratively, as it's often misused in English nowadays. The word 'literally' in Indonesian would be translated as 'secara harfiah,' and when used, it means exactly as stated. Just like 'literally,' words should stay connected to their roots, whether Latin, Greek, or otherwise, as their original meanings give them their true value and purpose.

[email protected] wrote (#291):

Depending on context, jargon and terminology change.
In this context, I'd agree that LLMs are a subset of technologies under the umbrella term "AI". But in common English discourse, LLM and AI are often used interchangeably. That's not wrong, because correctness is defined by the actual real usage of native speakers of the language.

I also come from a tech background. I'm a developer with 15 years of experience, I work for a large company, and my job is currently integrating LLMs and more traditional ML models into our products, because our shareholders think we need to.
Specificity is useful in technical contexts, but in these public contexts almost everyone knows what we're talking about, so the way we're using language is fine.

You know it's bad when someone with my username thinks you're being too pedantic lol. Don't be a language prescriptivist.

                                      • R [email protected]

                                        Stop drinking the cool aid bro. Think of these statements critically for a second. Environmental harm? Sure. I hope you're a vegan as well.

                                        Loss of media literacy: What does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead to that person choosing to cut corners, the problem is the demand for maximum efficiency at any cost, for top numbers. AI is is making a problem evident, not causing it. If you're home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.

                                        Do you feel you've become more stupid? Do you think it's possible? Why wouild other people, who are just like you, be these puppets to be brain washed by the evil machine?

                                        Ask yourself. How are people measuring intelligence? Creativity? How many people were in these studies and who funded them?
                                        If we had the measuring instrument needed to actually make categorizations like "People are losing intelligence." Psychologists wouldn't still be arguing over the exact definition of intelligence.

                                        Stop thinking of AI as a boogieman inside people's heads. It is a machine. People using the machine to achieve a mundane goal, it doesn't mean the machine created the goal or is responsible for everything wrong with humanity.

                                        Huge increase in inequality? What? Brother AI is a machine. It is the robber barons that are exploiting you and all of the working class to get obsenely rich. AI is the tool they're using. AI can't be held accountable. AI has no will. AI is a tool. It is people that are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages to look at the evil machine instead. And don't even use it, the less you know, the better. If you never engage with AI technology, you'll believe everything I say about how evil it is.

[email protected] wrote (#292):

That's some real "guns don't kill people, people kill people" apologist speak.
The only way to stop a bad robber baron using AI is a good robber baron using AI? C'mon.
I know that's not exactly what you said, but it's applicable.

I work with these tools every day, both as a tool my employer wants me to use and because I'm part of the problem: I integrate LLMs into my company's products to make them "smart". I'm familiar with the tech. This isn't coming from a place of ignorance where I've just been swayed by Luddites due to my lack of exposure.

When I use these tools I absolutely become temporarily stupider. I get into the rhythm of using it for everything instead of using it selectively.
But I'm middle-aged, which means both that I'll never be as good with it and that it's harder for it to affect me long term, since I've already largely finished developing my brain. For my generation I only worry that it'll be a brand-new source of misinformation, but I worry that (with the escalating attacks on our school system) it'll result in generations of kids who grow up without having developed certain mental skills related to problem solving, because they'll have always relied on it to solve their problems.

I know it's not the tool's fault, but when a tool can so easily cause massive accidental harm, it's easiest to just regulate the tool to curb the harm.

                                        • E [email protected]

                                          The point is, most wouldn't. It's of little real use currently, especially the LLM bullshit. The communities would have infinitely better things to pit resources to.

[email protected] wrote (#293):

"The point is, most wouldn't."

People currently want it despite it being stupid, which is why corporations are in a frenzy to be the monopoly that provides it. People want all sorts of stupid things. A different system wouldn't change that.
