agnos.is Forums

Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With

Technology · 134 Posts · 83 Posters
  • N [email protected]

    I came across this article in another Lemmy community that dislikes AI. I'm reposting instead of cross-posting so that we can have a conversation about how "work" might be changing with advancements in technology.

    The headline is clickbaity because Altman was referring to how farmers who lived decades ago might perceive the work "you and I do today" (Altman included) as not looking like work.

    The fact is that most of us work at many levels of abstraction from human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor that humans were forced to do generations ago.

    In my first job, which was IT support, the irony was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transitioned from a natural to a chopped state, nothing tangible to step back and enjoy at the end of the day.

    Bankers, fashion designers, artists, video game testers, software developers and countless other professions experience something quite similar. Yet, all of these jobs do in some way add value to the human experience.

    As humanity's core needs have been met with technology requiring fewer human inputs, our focus has been able to shift to creating value in less tangible, but perhaps not less meaningful ways. This has created a more dynamic and rich life experience than any of those previous farming generations could have imagined. So while it doesn't seem like the work those farmers were accustomed to, humanity has been able to shift its attention to other types of work for the benefit of many.

    I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write in books; now software automatically encodes accounting transactions as they're made. At one time software developers might spend days setting up the framework of a new project; now an LLM can do the bulk of the work in minutes.

    These days we have fewer bookkeepers - most companies don't need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.

    How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn't have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

    At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, and much of the output of our abstracted work is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn-mowing machines become.

    [email protected]
    wrote last edited by [email protected]
    #52

    creating value

    This kind of pseudo-science is a problem.

    There is no such thing as "value". People serve capital so they don't starve to death. There will always be a need for servants. In particular capital needs massive guard labor to violently enforce privilege and inequality.

    The technologies falsely hyped as "AI" are no different. It's just another computer program used by capital to hoard privilege and violently control people. The potential for unemployment is mostly just more bullshit. These grifters are literally talking about how "AI" will battle the anti-christ. Insofar as some people might maybe someday lose some jobs, that's been the way that capitalism works for centuries. The poor will be enlisted, attacked, removed, etc., as usual.

    andrewrgross@slrpnk.netA S E 3 Replies Last reply
    7
    • J [email protected]

      Ah, I see. We in the software industry are no longer allowed to use our own terms because outsiders co-opted them.

      Noted.

      [email protected]
      wrote last edited by
      #53

      Grow some up.

      J 1 Reply Last reply
      0
      • N [email protected]


        [email protected]
        wrote last edited by
        #54

        Apart from the questionable quality of the results, a big issue to me with LLMs is the way they substitute for human interaction with other humans, which is one of the most fundamental ways humans learn, innovate and express themselves.

        No technological innovation before this replaced human interaction with a facsimile in that way.

        1 Reply Last reply
        4
        • M [email protected]

          Grow some up.

          [email protected]
          wrote last edited by
          #55

          Dude, what age are you? 13? Log off and go play with your friends.

          1 Reply Last reply
          3
          • N [email protected]

            If your argument attacks my credibility, that's fine, you don't know me. We can find cases where developers use the technology and cases where they refuse.

            Do you have anything substantive to add to the discussion about whether AI LLMs are anything more than just a tool that allows workers to further abstract, advancing all of the professions it can touch towards any of: better / faster / cheaper / easier?

            [email protected]
            wrote last edited by
            #56

            Yeah, I've got something to add. The ruling class will use LLMs as a tool to lay off tens of thousands of workers to consolidate more power and wealth at the top.

             LLMs also advance no profession at all while they can still hallucinate and be manipulated by their owners, producing more junk that requires a skilled worker to fix. Even my coworkers have said, "if I have to fix everything it gives me, why didn't I just do it myself?"

            LLMs also have dire consequences outside the context of labor. Because of how easy they are to manipulate, they can be used to manufacture consent and warp public consciousness around their owners' ideals.

            LLMs are also a massive financial bubble, ready to pop and send us into a recession. Nvidia is shoveling money into companies so they can shovel it back into Nvidia.

            Would you like me to continue on about the climate?

            1 Reply Last reply
            13
            • dojan@pawb.socialD [email protected]

              At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

              I'd not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. If we're talking about them setting up the skeleton of a project, then templates have already been around for decades at this point. You also don't really set up new projects all that often.

              [email protected]
              wrote last edited by
              #57

               Fuck, I barely let AI write functions in my code, because half the time the fuckin idiot can't even guess the correct method name and parameters, even though it could pull up the goddamned help page like I can, or even Google the basic syntax.

              M 1 Reply Last reply
              5
              • N [email protected]


                [email protected]
                wrote last edited by [email protected]
                #58

                 CEO isn't an actual job either; it's just the 21st century's titre de noblesse.

                P 1 Reply Last reply
                32
                • 6 [email protected]

                  Yes. How is it relevant to modern SWE practices?

                  [email protected]
                  wrote last edited by
                  #59

                  OP wrote 10 paragraphs and your head is still in devland.

                  1 Reply Last reply
                  2
                  • M [email protected]

                    Granting them AI status, we should recognize that they "gained their abilities" by training on the rando junk that people post on the internet.

                    I have been working with AI for computer programming, semi-seriously for 3 months, pretty intensively for the last two weeks. I have also been working with humans for computer programming for 35 years. AI's "failings" are people's failings. They don't follow directions reliably, and if you don't manage them they'll go down rabbit holes of little to no value. With management, working with AI is like an accelerated experience with an average person, so the need for management becomes even more intense - where you might let a person work independently for a week then see what needs correcting, you really need to stay on top of AI's "thought process" on more of a 15-30 minute basis. It comes down to the "hallucination rate" which is a very fuzzy metric, but it works pretty well - at a hallucination rate of 5% (95% successful responses) AI is just about on par with human workers - but faster for complex tasks, and slower for simple answers.

                    Interestingly, for the past two weeks, I have been having some success with applying human management systems to AI: controlled documents, tiered requirements-specification-details documents, etc.

                    [email protected]
                    wrote last edited by
                    #60

                    It comes down to the "hallucination rate" which is a very fuzzy metric, but it works pretty well - at a hallucination rate of 5% (95% successful responses) AI is just about on par with human workers - but faster for complex tasks, and slower for simple answers.

                    I have no idea what you're doing, but based on my own experience, your error/hallucination rate is like 1/10th of what I'd expect.

                    I've been using an AI assistant for the better part of a year, and I'd laugh at the idea that they're right even 60% of the time without CONSTANTLY reinforcing fucking BASIC directives or telling it to provide sources for every method it suggests. Like, I can't even keep the damned thing reliably in the language framework I'm working on without it falling back to the raw vendor CLI in project conversations. I'm correcting the exact same mistakes week after week because the thing is braindead and doesn't understand that you cannot use reserved keywords for your variable names. It just makes up parameters to core functions based on the question I ask it, regardless of documentation, until I call its bullshit; then it gets super conciliatory and actually double-checks its own work instead of authoritatively lying to me.
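                    To spell out the reserved-keyword thing for anyone outside devland, here's a minimal Python sketch of my own (illustrative, not the assistant's actual output):

```python
# Python reserves certain keywords; using one as a variable name is an
# instant SyntaxError, which any linter or interpreter flags immediately:
#
#     class = "premium"    # SyntaxError: invalid syntax
#
# Any non-reserved identifier is fine:
customer_class = "premium"
print(customer_class)
```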

                    You're not wrong that AI makes human-style mistakes, but a human can learn, or at least generally doesn't have to be taught the same fucking lesson once a week for a year (or gets fired well before then). AI is artificial, but there absolutely isn't any intelligence behind it; it's just a stochastic parrot that somehow arrives at the plausible answers the algorithm expects you want to hear.

                    M A 2 Replies Last reply
                    4
                    • K [email protected]

                      You drive a tractor up and down a field, is that really any more work than the rest of us?

                      [email protected]
                      wrote last edited by
                      #61

                      Much of which now relies on GPS. My father-in-law just has to turn it around and line it back up for spraying.

                      1 Reply Last reply
                      1
                      • N [email protected]


                        [email protected]
                        wrote last edited by
                        #62

                        You seem to be taking this a bit personally…

                        1 Reply Last reply
                        3
                        • G [email protected]

                          You guys are getting documentation?

                          [email protected]
                          wrote last edited by [email protected]
                          #63

                          Well, if I'm not, then neither is an LLM.

                          But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.

                          G 1 Reply Last reply
                          3
                          • P [email protected]


                            [email protected]
                            wrote last edited by
                            #64

                            your error/hallucination rate is like 1/10th of what I’d expect. I’ve been using an AI assistant for the better part of a year,

                            I'm having AI write computer programs, and when I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.

                            CONSTANTLY reinforcing fucking BASIC directives

                            Yes, that is the "limited context window" - in my experience people have it too.

                            I have given my AIs basic workflows to follow for certain operations, simple 5 to 8 step processes, and they do them correctly about 19 times out of 20. But the other 5% of the time they'll be executing the same process and just skip a step - like many people tend to do as well.
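                            To put rough numbers on that (a toy model of my own, assuming each step fails independently), per-step reliability compounds fast over a multi-step workflow:

```python
# Toy model: probability a workflow completes with no skipped steps,
# assuming each step independently succeeds with probability p_step.
def workflow_success(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

for n in (1, 5, 8):
    print(f"{n} step(s) at 95% each: {workflow_success(0.95, n):.0%}")
# 1 step: 95%, 5 steps: 77%, 8 steps: 66% -- so finishing a 5-8 step
# process correctly ~19 times out of 20 implies per-step reliability
# well above 95%.
```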

                            but a human can learn

                            In the past week I have been having my AIs "teach themselves" these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I had it research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we had just made. I interrupted it and told it to go back and do it the right way, as its work instructions already tell it to. After the reminder it did it right: limited context window.
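                            For anyone wondering what "context window management" boils down to in practice, the crudest tactic is just budgeting which turns stay in the prompt. A toy Python sketch (my illustration, not what my actual tooling does):

```python
# Keep the newest conversation turns that fit a fixed token budget,
# dropping older ones first; the model never sees what gets dropped,
# which is exactly how standing instructions silently fall away.
def trim_history(messages, budget, count_tokens):
    kept, used = [], 0
    for msg in reversed(messages):      # newest -> oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

approx_tokens = lambda s: max(1, len(s) // 4)   # crude ~4 chars/token
history = [f"turn {i}: do the thing the right way" for i in range(100)]
print(len(trim_history(history, budget=200, count_tokens=approx_tokens)))
```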

                            The main problem I have with computer programming AIs is: when you have a human work on a problem for a month, you drop by every day or two to see how it's going, clarify, course correct. The AI does the equivalent work in an hour and I just don't have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer locked in a room and fed Jolt cola and Cheetos through a slot in the door would after a month alone.

                            An interesting response I got from my AI recently regarding this phenomenon: it produced "training seminar" materials for our development team on how to proceed incrementally with AI work and carefully review intermediate steps. I already do that with my "work side" AI project, so it didn't suggest it there. My home-side project, where I normally approve changes without review, is the one that suggested the training seminar.

                            1 Reply Last reply
                            0
                            • electricblush@lemmy.worldE [email protected]

                              This.

                              It will be the baby of Idiocracy and Blade Runner.

                              All the horrible dehumanising parts, without any of the gritty aesthetics, and every character is some kind of sadistic Elmer Fudd.

                              [email protected]
                              wrote last edited by
                              #65

                              The baby of Idiocracy and Blade Runner would be called Running While Holding A Sharp Knife, I believe.

                              1 Reply Last reply
                              0
                              • T [email protected]


                                [email protected]
                                wrote last edited by
                                #66

                                100%.

                                Peter Frase deconstructed this a decade ago in an article (and a subsequent book), "Four Futures".

                                It's really not complicated. Saying "the rich want to make us all obsolete and then kill us off" sounds paranoid and reactionary, but if you actually study these dynamics critically, that's a pretty good distillation of what they'd like to do, and they're not really concealing it.

                                P 1 Reply Last reply
                                1
                                • dojan@pawb.socialD [email protected]


                                  [email protected]
                                  wrote last edited by
                                  #67

                                  Yup. If it takes me more than a day to get started on business logic, that's on me. That should take four hours, max.

                                  1 Reply Last reply
                                  0
                                  • N [email protected]


                                    [email protected]
                                    wrote last edited by
                                    #68

                                    Use open Chinese models.
                                    Qwen3 is amazing.

                                    Here's a free ChatGPT alternative: https://chat.qwen.ai/. It has image generation and other features.

                                    Wan.video - a free Sora alternative

                                    1 Reply Last reply
                                    2
                                    • S [email protected]

                                      I agree with the sentiment, as bad as it feels to agree with Altman about anything.

                                      I'm working as a software developer, working on the backend of the website/loyalty app of some large retailer.

                                      My job is entirely useless. I mean, I'm doing a decent job keeping the show running, but (a) management shifts priorities all the time and about 2/3 of all the "super urgent" things I work on get cancelled before they get released, and (b) if our whole department instantly disappeared and the app and website were just gone, nobody would care. Like, literally. We have an app and a website because everyone has to have one, not because there's a real benefit to anyone.

                                      The same is true for most of the jobs I worked in, and about most jobs in large corporations.

                                      So if AI could somehow replace all these jobs (which it can't), nothing of value would be lost, apart from the fact that our society requires everyone to have a job, bullshit or not. And these bullshit jobs even tend to be the better-paid ones.

                                      So AI doing the bullshit jobs isn't the problem, but people having to do bullshit jobs to get paid is.

                                      If we all get a really good universal basic income or something, I don't think most people would mind that they don't have to go warm a seat in an office anymore. But since we don't and we likely won't in the future, losing a job is a real problem, which makes Altman's comment extremely insensitive.

                                      [email protected]
                                      wrote last edited by
                                      #69

                                      Agreed. His comments are so bizarrely stupid on so many levels.

                                      They're not just "wrong": they're half-right-half-wrong. And the half that is wrong is idiotic in the extreme, while the half that is right casually acknowledges a civilizational crisis like someone watching their neighbors screaming in a house fire while sipping a cup of coffee.

                                      Like this farmer analogy: the farmers were right! Their way of life and all that mattered to them was largely exterminated by these changes, and we're living in their worst nightmare! And he even goes so far as acknowledging this, and acknowledging that we'll likely experience the same thing. We're all basically cart horses at the dawn of the automobile, and we might actually hate where this is going. But... It'll probably be great.

                                      He just has a hunch that even though all evidence suggests that this will lead to the opposite of the greatest good for the greatest number of people, for some reason his brain can't shake the sense that it's going to be good anyway. I mean, it has to be, otherwise that would make him a monster! And that simply can't be the case. So there you have it.

                                      It'll be ~~terrible~~ great.

                                      1 Reply Last reply
                                      1
                                      • N [email protected]

                                        I was at the Canton Fair last week which is a trade show in China where manufacturers display some of their latest technology.

                                        There was a robotics display hall where they were showing off how factories, kitchens, and other labor-based jobs can be automated with technology.

                                        a robot that can operate a deep fryer in a restaurant

                                        This doesn't really have a lot to do with AI or LLMs, but the field of robotics is advancing fast and a lot of basic work that humans had to do in the past won't be needed as much in the future.

                                        [email protected]
                                        wrote last edited by
                                        #70

                                        Yeah... but rich people don't want to eat food prepared cheaply and efficiently by robots. They want $10k-a-plate bullshit, not peasant food. They will, however, gladly use robots for manual labor like construction and soldiering.

                                        1 Reply Last reply
                                        0
                                        • dojan@pawb.socialD [email protected]


                                          [email protected]
                                          wrote last edited by
                                          #71

                                          If we're talking about them setting up the skeleton of a project, then templates have already been around for decades at this point.

                                          That's what LLMs are good at - taking old work (without consent) and regurgitating it while pretending it's new and unique.

                                          1 Reply Last reply
                                          2