
agnos.is Forums


Sam Altman Says If Jobs Get Wiped Out, Maybe They Weren't Even "Real Work" to Start With

  • G [email protected]

    You guys are getting documentation?

[email protected] wrote (#63):

    Well, if I'm not, then neither is an LLM.

    But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.

    • P [email protected]

      It comes down to the "hallucination rate" which is a very fuzzy metric, but it works pretty well - at a hallucination rate of 5% (95% successful responses) AI is just about on par with human workers - but faster for complex tasks, and slower for simple answers.

      I have no idea what you're doing, but based on my own experience, your error/hallucination rate is like 1/10th of what I'd expect.

I've been using an AI assistant for the better part of a year, and I'd laugh at the idea that they're right even 60% of the time without CONSTANTLY reinforcing fucking BASIC directives or telling it to provide sources for every method it suggests. Like, I can't even keep the damned thing reliably in the language framework I'm working on without it falling back to the raw vendor CLI in project conversations. I'm correcting the exact same mistakes week after week because the thing is braindead and doesn't understand that you cannot use reserved keywords for your variable names. It just makes up parameters to core functions based on the question I ask it, regardless of documentation, until I call its bullshit and it gets super conciliatory and then actually double-checks its own work instead of authoritatively lying to me.

You're not wrong that AI makes human-style mistakes, but a human can learn, or at least generally doesn't have to be taught the same fucking lesson at least once a week for a year (or gets fired well before then). AI is artificial, but there absolutely isn't any intelligence behind it; it's just a stochastic parrot that somehow comes to plausible answers that the algorithm expects you want to hear.

[email protected] wrote (#64):

      your error/hallucination rate is like 1/10th of what I’d expect. I’ve been using an AI assistant for the better part of a year,

      I'm having AI write computer programs, and when I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.

      CONSTANTLY reinforcing fucking BASIC directives

      Yes, that is the "limited context window" - in my experience people have it too.

I have given my AIs basic workflows to follow for certain operations, simple 5 to 8 step processes, and they do them correctly about 19 times out of 20, but in that other 5% they'll be executing the same process and just skip a step - like many people tend to as well.
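(For intuition on how per-step slips compound: if a whole workflow only succeeds ~95% of the time, each individual step has to land right about 99% of the time. A back-of-envelope sketch in Python - numbers hypothetical, just matching the 19-out-of-20 figure above, and assuming independent, equally likely per-step failures:)

```python
# Minimal sketch: what per-step reliability does a given end-to-end
# workflow success rate imply? (Assumes independent, equally likely
# per-step failures - a simplification.)

def per_step_success(workflow_success: float, steps: int) -> float:
    """Per-step success rate required for the whole workflow to succeed."""
    return workflow_success ** (1 / steps)

for steps in (5, 8):
    p = per_step_success(0.95, steps)
    print(f"{steps}-step workflow: each step must succeed {p:.1%} of the time")

# 5-step workflow: each step must succeed 99.0% of the time
# 8-step workflow: each step must succeed 99.4% of the time
```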

      but a human can learn

In the past week I have been having my AIs "teach themselves" these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I had it research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we just made. I interrupted it and told it to go back and do it the right way, like its work instructions already tell it to. After the reminder it did it right: limited context window.

      The main problem I have with computer programming AIs is: when you have a human work on a problem for a month, you drop by every day or two to see how it's going, clarify, course correct. The AI does the equivalent work in an hour and I just don't have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer locked in a room and fed Jolt cola and Cheetos through a slot in the door would after a month alone.

An interesting response I got from my AI recently regarding this phenomenon: it provided "training seminar" materials for our development team telling them how to proceed incrementally with the AI work and carefully review intermediate steps. I already do that with my "work side" AI project, and that one didn't suggest it. My home-side project, where I normally approve changes without review, is the one that suggested the training seminar.

[email protected] wrote:

        This.

It will be the baby of Idiocracy and Blade Runner.

        All the horrible dehumanising parts, without any of the gritty aesthetics, and every character is some kind of sadistic Elmer Fudd.

[email protected] wrote (#65):

The baby of Idiocracy and Blade Runner would be called Running While Holding A Sharp Knife, I believe.

        • T [email protected]

          creating value

          This kind of pseudo-science is a problem.

          There is no such thing as "value". People serve capital so they don't starve to death. There will always be a need for servants. In particular capital needs massive guard labor to violently enforce privilege and inequality.

The technologies falsely hyped as "AI" are no different. It's just another computer program used by capital to hoard privilege and violently control people. The potential for unemployment is mostly just more bullshit. These grifters are literally talking about how "AI" will battle the anti-christ. Insofar as some people might maybe someday lose some jobs, that's been the way capitalism has worked for centuries. The poor will be enlisted, attacked, removed, etc. as usual.

[email protected] wrote (#66):

          100%.

Peter Frase deconstructed this a decade ago in an article (and subsequent book), "Four Futures".

          It's really not complicated. Saying 'the rich want to make us all obsolete and then kill us off ' sounds paranoid and reactionary, but if you actually study these dynamics critically that's a pretty good distillation of what they'd like to do, and they're not really concealing it.

[email protected] wrote:

            At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

            I'd not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. If we're talking about them setting up the skeleton of a project, then templates have already been around for decades at this point. You also don't really set up new projects all that often.

[email protected] wrote (#67):

            Yup. If it takes me more than a day to get started working on business logic, that's on me. That should take max 4 hours.

            • N [email protected]

              I came across this article in another Lemmy community that dislikes AI. I'm reposting instead of cross posting so that we could have a conversation about how "work" might be changing with advancements in technology.

The headline is clickbaity because Altman was referring to how farmers who lived decades ago might perceive that the work "you and I do today" (Altman included) doesn't look like work.

              The fact is that most of us work far abstracted from human survival by many levels. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back breaking labor jobs that humans were forced to do generations ago.

              In my first job, which was IT support, the concept was not lost on me that all day long I pushed buttons to make computers beep in more friendly ways. There was no physical result to see, no produce to harvest, no pile of wood being transitioned from a natural to a chopped state, nothing tangible to step back and enjoy at the end of the day.

              Bankers, fashion designers, artists, video game testers, software developers and countless other professions experience something quite similar. Yet, all of these jobs do in some way add value to the human experience.

              As humanity's core needs have been met with technology requiring fewer human inputs, our focus has been able to shift to creating value in less tangible, but perhaps not less meaningful ways. This has created a more dynamic and rich life experience than any of those previous farming generations could have imagined. So while it doesn't seem like the work those farmers were accustomed to, humanity has been able to shift its attention to other types of work for the benefit of many.

              I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write in books, now software automatically encodes accounting transactions as they're made. At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

              These days we have fewer bookkeepers - most companies don't need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.

              How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn't have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

              At the end of the day, I suspect we humans are biologically wired with a deep desire to output rewarding and meaningful work, and much of the results of our abstracted work is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowing machines become.

[email protected] wrote (#68):

Use open Chinese models. Qwen3 is amazing.

Here's a free ChatGPT alternative: https://chat.qwen.ai/. It has image generation and other features.

Wan.video is a free Sora alternative.

              • S [email protected]

                I agree with the sentiment, as bad as it feels to agree with Altman about anything.

                I'm working as a software developer, working on the backend of the website/loyalty app of some large retailer.

My job is entirely useless. I mean, I'm doing a decent job keeping the show running, but (a) management shifts priorities all the time and about 2/3 of all the "super urgent" things I work on get cancelled before they get released, and (b) if our whole department instantly disappeared and the app and website were just gone, nobody would care. Like, literally. We have an app and a website because everyone has to have one, not because there's a real benefit to anyone.

The same is true for most of the jobs I've worked in, and for most jobs in large corporations.

                So if AI could somehow replace all these jobs (which it can't), nothing of value would be lost, apart from the fact that our society requires everyone to have a job, bullshit or not. And these bullshit jobs even tend to be the better-paid ones.

                So AI doing the bullshit jobs isn't the problem, but people having to do bullshit jobs to get paid is.

                If we all get a really good universal basic income or something, I don't think most people would mind that they don't have to go warm a seat in an office anymore. But since we don't and we likely won't in the future, losing a job is a real problem, which makes Altman's comment extremely insensitive.

[email protected] wrote (#69):

                Agreed. His comments are so bizarrely stupid on so many levels.

                They're not just "wrong": they're half-right-half-wrong. And the half that is wrong is idiotic in the extreme, while the half that is right casually acknowledges a civilizational crisis like someone watching their neighbors screaming in a house fire while sipping a cup of coffee.

                Like this farmer analogy: the farmers were right! Their way of life and all that mattered to them was largely exterminated by these changes, and we're living in their worst nightmare! And he even goes so far as acknowledging this, and acknowledging that we'll likely experience the same thing. We're all basically cart horses at the dawn of the automobile, and we might actually hate where this is going. But... It'll probably be great.

                He just has a hunch that even though all evidence suggests that this will lead to the opposite of the greatest good for the greatest number of people, for some reason his brain can't shake the sense that it's going to be good anyway. I mean, it has to be, otherwise that would make him a monster! And that simply can't be the case. So there you have it.

It'll be ~~terrible~~ great.

                • N [email protected]

I was at the Canton Fair last week, which is a trade show in China where manufacturers display some of their latest technology.

There was a robotics display hall where they were showing off how lots of factories, kitchens, and other labor-based jobs can be automated with technology.

                  a robot that can operate a deep fryer in a restaurant

                  This doesn't really have a lot to do with AI or LLMs, but the field of robotics is advancing fast and a lot of basic work that humans had to do in the past won't be needed as much in the future.

                  D This user is from outside of this forum
                  D This user is from outside of this forum
                  [email protected]
                  wrote last edited by
                  #70

Yeah... But rich people don't want to eat food prepared cheaply and efficiently by robots. They want $10k-a-plate bullshit, not peasant food. They will, however, gladly use robots for manual labor like construction and soldiering.

[email protected] wrote:

I'd not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. […]

[email protected] wrote (#71):

                    If we're talking about them setting up the skeleton of a project, then templates have already been around for decades at this point.

                    That's what LLMs are good at - taking old work (without consent) and regurgitating it while pretending it's new and unique.

[email protected] wrote:

I'd not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. […]

[email protected] wrote (#72):

I tried to demo an agentic AI in JetBrains to a coworker, just as a "hey, look at this neat thing that can make changes on its own". As the example, I told it to convert a constructor in C# to a primary constructor.

                      So it "thought" and made the change, "thought" again and reverted the change, "thought" once again and made the change again, then it "thought" for a 4th time and reverted the changes again. I stopped it there and just shook my head.

                      • N [email protected]

I came across this article in another Lemmy community that dislikes AI. […]

[email protected] wrote (#73):

                        I have been working with computers, and networks, and the internet since the 1980s. Over this span of 40-ish years, "how I work" has evolved dramatically through changes in how computers work and more dramatically through changes in information availability. In 1988 if you wanted to program an RS-232 port to send and receive data, you read books. You physically traveled to libraries, or bookstores - maybe you might mail order one, but that was even slower. Compared to today the relative costs to gain the knowledge to be able to perform the task were enormous, in time invested, money spent, and physical resources (paper, gasoline, vehicle operating costs).

                        By 20 years ago, the internet had reformulated that equation tremendously. Near instant access to worldwide data, organized enough to be easier to access than a traditional library or bookstore, and you never needed to leave your chair to get it. There was still the investment of reading and understanding the material, and a not insignificant cost of finding the relevant material through search, but the process was accelerated from days or more to hours or less, depending on the nature of the learning task.

                        A year ago, AI hallucination rates made them curious toys for me - too unreliable to be of net practical value. Today, in the field of computer programming, the hallucination rate has dropped to a very interesting point: almost the same as working with a not-so-great but still useful human colleague. The difference being: where a human colleague might take 40 hours to perform a given task (not that the colleague is slow, just it's a 40 hour task for an average human worker), the AI can turn around the same programming task in 2 hours or less.

Humans make mistakes, they get off on their own tracks and waste time following dead ends. This is why we have meetings. Not that meetings are the answer to everything, but at least they keep us somewhat aware of what other members of the team are doing. That not-so-great programmer working on a 40 hour task is much more likely to create a valuable product if you check in with them every day or so, see "how's it going", help them clarify points of confusion, check their understanding and direction of work completed so far. That's 4 check points of 15 minutes to an hour in the middle of the 40 hour process. My newest AI colleagues are ripping through those 40 hour tasks in 2 hours - impressive - and when I don't put in the additional 2 hours of managing them through the process, they get off the rails, wrapped around the axles, unable to finish a perfectly reasonable task because their limited context windows don't keep all the important points in focus throughout the process. A bigger difficulty is that I don't get 23 hours of "offline wetware processing" between touch points to refine my own understanding of the problems and desired outcomes.

Humans have developed software development processes to help manage human shortcomings: limited attention spans and memory. We still out-perform AI in some of this context-window-span thing, but we have our own non-zero hallucination rates. Asking an AI chatbot to write a program one conversational prompt at a time only gets me so far. Providing an AI with a more mature software development process to follow gets much farther. The AI isn't following these processes (which it helped to translate from human concepts into its own language of workflows, skills, etc.) 100% perfectly - I catch it skipping steps in simple 5-step workflows - but like human procedures, there's a closed-loop procedure-improvement procedure to help it perform better in the future.

Perhaps most importantly, the procedures are constantly reminding the AI to be "self aware" of its context window limitations, do RAG (retrieval-augmented generation) over best practices for context management, and DRY ("don't repeat yourself": reduce repetition through references to single points of truth) its own procedures and the documentation it generates. Will I succeed in having AI rebuild a 6 month project I did five years back, doing it better this time - expanding its scope to what would have been a year long development effort if I had continued doing it solo? Unclear. I'm two weeks in and I feel like I'm about where I was after two weeks of development last time, but it also feels like I have a better foundation to complete the bigger scope this time using the AI tools, and there's that tantalizing possibility that at any point now it might just take off and finish it by itself.

                        • T [email protected]

This kind of pseudo-science is a problem. There is no such thing as "value". […]

[email protected] wrote (#74):

                          There is no such thing as “value”

                          There is. Value is getting more for less. Horses created value for farmers by plowing more land for the same amount of human effort, and that required a higher initial investment into the horse. Tractors increased that value by reducing time spent plowing as well as eliminating horse maintenance, and that required a higher initial investment into the tractor. And so on.

Value is tangible and quantifiable. You calculate initial and ongoing costs (investments, subscriptions, etc.) and compare them to improvements in yield; value is yield divided by cost, compared before and after the change. If AI helps you sell more products and that increase exceeds the additional cost of the AI tool, or you produce a similar amount of products at less cost (i.e. AI actually replaces jobs), then that's value.
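As a toy illustration of that yield-over-cost framing (all numbers invented for the example, echoing the plowing comparison above):

```python
# Hypothetical sketch of "value = yield / cost", before vs. after a change.

def value(yield_per_week: float, cost_per_week: float) -> float:
    """Yield per unit of cost."""
    return yield_per_week / cost_per_week

before = value(10, 200)  # 10 acres plowed/week at $200/week of labor
after = value(40, 300)   # tractor: 40 acres/week at $300/week (fuel + amortization)

print(f"before: {before:.3f} acres per dollar")    # 0.050
print(f"after:  {after:.3f} acres per dollar")     # 0.133
print(f"value multiplier: {after / before:.1f}x")  # ~2.7x
```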

                          There will always be a need for servants

True. I don't understand the comparison to value, though; servants will exist regardless of the value created by some change. If value is high enough, the servants will live a better life due to access to better products. But there will always be a division between those in power and those not in power.

                          This is true regardless of the economic system. As long as there is a separation of those in power and those not in power, there will be a gap in means, and AFAICT we've never known a time where that gap hasn't existed.

                          That said, being a servant isn't necessarily bad. The poor in the US and Europe are far better off than the poor in much of the rest of the world. Why? If a society is able to provide stability, everyone is better off. Generally speaking, someone needs to have more power than others in order to produce stability, because that power gap is essential to stop others from usurping power. Why is Africa so screwed up? Because groups are able to challenge the power of the government and take for themselves outside the legal framework. They run mines operated through child labor, threaten law enforcement, etc, all to get more value than they would otherwise be allowed to get working through legal channels.

                          These grifters are literally talking about how “AI” will battle the anti-christ.

                          They're selling a product. When was the last time you saw an advertisement that didn't overhype the product? I just watched a bunch of ads for a KFC chicken sandwich while watching some big college football games. If I go to KFC and buy that chicken sandwich, will it look anything like what they showed in the ad? Absolutely not!

Some execs will get sucked into that pitch, and others will be more careful in testing the product out to know what value it actually offers. We're doing that right now in my company. Basically, we have an outside contractor that has been supporting one of our products for several years now, and we're pushing them to demonstrate a substantial increase in productivity using AI tools. They get paid based on demonstrating that improvement. I think that's the right way to do it: you test a new product out, and if it delivers as advertised, you increase use of it, and if it doesn't, you stop using it. Don't jump all in based on some marketing; jump all in based on actual experience.

                          Insofar as some people might maybe someday lost some jobs, that’s been the way that capitalism works for centuries. The poor will be enlisted, attacked, removed, etc. as usual.

                          The way capitalism has worked for centuries is that some new tech replaces a bunch of jobs, and that creates opportunities for new jobs, often w/ higher pay (sometimes not).

Look at the cotton gin. Before it, cotton wasn't very profitable, since it was too labor intensive to separate the seeds from the fiber. When the cotton gin was invented in 1793, it completely transformed the economy of the southern US and drove up demand for slavery, since there was now a ton of money to be made growing cotton. If slavery had been illegal, this would instead have driven up demand for paid farm hands. There's an article about cotton exports before and after the Civil War: exports increased in the early 1800s as more and more farms adopted cotton gins, stalled during the war, and continued to increase after it (after slaves were freed).

                          This has happened in a number of other areas, such as:

• digital computers replacing manual computers (people calculating with paper and pencil)
• automated recording and transcription services replacing manual transcription in a number of fields
• the internet and GPS/online maps replacing yellow pages and newspapers, thus eliminating delivery jobs
• the printing press replacing manual transcription (scribe) jobs
• the light bulb replacing gas lighting infrastructure in cities and the lamplighter jobs that went with it

Each of those inventions killed tons of jobs while creating new ones. Instead of having people manually calculate things like artillery trajectories, we now have IT, software development, and analysis jobs. Instead of people going around town every night to light lamps, they can manage electrical distribution to entire cities, monitor the status of the grid (i.e. get a message that a bulb has died), etc.

                          My point here is that the technological advancement created jobs because it made an industry more profitable. Everyone wanted cotton shirts because the old standard, wool, sucked, especially in the summer. We have an IT industry with tons of subfields because we don't need a room full of computers to calculate things like artillery trajectories and financial results. We have a widespread electrical network that brings light, climate control, and computation to individual homes at the cost of a few jobs lighting street lamps. The constant here is that employers want to get the most value they can (i.e. prefer unpaid slaves to paid farm hands), but they will invest in increased production if that increases profits proportionally.

                          I'm not too worried about the job market w/ AI. Yeah, it'll replace some jobs, but it'll create others. If running a content distribution network is less expensive, perhaps they can afford to take on less profitable content (i.e. indie content), which creates more content creation jobs. If running a search engine is cheaper, perhaps it makes sense to make lots of niche search engines that help discover content w/ less ad budget, and that improves profitability of smaller creators. If building new software projects is cheaper, perhaps they can pay software architects more to oversee more projects (fewer employees, more lucrative roles). And so on.

                          Generally speaking, innovations that increase value tend to apply downward pressure on prices because that lowers the barrier to competition, which means increased job creation and often better distribution of wealth. The transition period is certainly hard, but the net result after is usually positive.

                          • M [email protected]

                            What do i even answer here...

Who's even talking about computer scientists? It's the public and especially company bosses who get the wrong expectations about "intelligence". It's about psychology, not about scientifically correct names.

[email protected] wrote (#75):

                            The solution to the public misusing technical terms isn't to change the technical terms, but to educate the public. All of the following fall under AI:

• pathing algorithms of computer opponents, but probably not the decisions that computer opponents make (i.e. who to attack; that's usually based on manually specified logic)
• the speech-to-text your phone used before Gemini or whatever it's called now on Android (Gemini is also AI, just a different type of AI)
• home camera systems that can detect people vs. animals, and sometimes classify those animals by species
• DDoS protection systems and load balancers for websites probably use some type of AI

                            AI is a broad field, and you probably interact with non-LLM variants every day, whether you notice or not. Here's a Wikipedia article that goes through a lot of it. LLMs/GPT are merely one small subfield in the larger field of AI.

                            I don't understand how people went from calling the computer player in their game "AI" (or even older, "CPU"), which nobody mistook for actual intelligence, to now people believing AI means something is sentient. Maybe it's because LLMs are more convincing since they do a much better job at languages, idk, but it's the same category of thing under the hood. ChatGPT isn't "thinking," and when it claims to "think," it's basically turning a prompt into a set of things to "think" about (basically generates and answers related prompts), and then uses that set of things in its context to provide an answer. It's not actually "thinking" as people do, it's merely following a set of statistically-motivated steps based on your prompt to generate a relevant answer. It's a lot more complex than that Warcraft 2 bot you played against as a kid, but it's still following steps a human designed, along with some statistical methods to adapt to things the developer didn't encounter.
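To make the "game AI is just human-designed steps" point concrete, here's a toy sketch (in Python, purely illustrative) of the kind of grid pathing algorithm mentioned in the first bullet above - no learning, no language model involved:

```python
# Toy "game AI": breadth-first search pathing on a grid. Every move the
# computer opponent makes here comes from an algorithm a human designed.
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:  # walk back through predecessors to build the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],  # 1 = wall
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```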

                            • S [email protected]

I agree with the sentiment, as bad as it feels to agree with Altman about anything. […]

[email protected] wrote (#76):

                              The same is true for most of the jobs I worked in, and about most jobs in large corporations.

                              I don't think that's necessarily true.

                              My job started as a relatively BS job. Basically, the company I work for makes physical things, and the people who use those physical things need to create reports to keep the regulators happy. So my first couple years on the job was improving the report generation app, which was kind of useful since it saved people an hour or two a week in producing reports. But the main reason we had this app in the first place was because our competitors had one, and the company needed a digital product to point to in order to sell customers (who didn't use the app, someone a few layers down did) on it. Basically, my job existed to check a box.

                              However, my department went above and beyond and created tools to optimize our customers' businesses. We went past reporting and built simulations related to reporting, but that brought actual value. They could reduce or increase use of our product based on actual numbers, and that change would increase their profitability (more widgets produced per dollar spent). When the company did a round of cost cutting, they took a look at our department ready to axe us, but instead increased our funding when they saw the potential of our simulations, and now we're making using the app standard for all of our on-staff consultants and front-and-center for all new customer acquisitions (i.e. not just reporting, but showcasing our app as central to the customer's business).

                              All that has happened over the last year or so, so I guess we'll see if that actually increases customer retention and acquisition. My point is that my job transitioned from something mostly useless (glorified PDF generator) to something that actually provides value to the business and likely reduces costs downstream (that's about 3 steps away from the retail store, but it could help cut prices a few percent on certain products while improving profits for us and our customers).

                              If we all get a really good universal basic income or something

I disagree with your assertion that many jobs exist because people need jobs. I think jobs exist because even "BS" jobs create value. If there were a labor surplus today, jobs would be created because the lower cost of labor acquisition would make certain products profitable that wouldn't otherwise be.

                              That said, I am 100% a fan of something like UBI, though I personally would make it based on income (i.e. a Negative Income Tax, so only those under $X get the benefit), but that's mostly to make the dollar amount of that program less scary. For example, there are ~130M households in the US (current pop is 342M, or about 2.6 people per household). The poverty line is $32,150 for a family, and sending that out as UBI would cost ~4.1T, which is almost as much as the current US budget. If we instead brought everyone to the poverty line through something like NIT, that's only ~168B, or about 4% of the current budget.
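The arithmetic checks out; here's the back-of-envelope version (inputs are the post's own figures, not fresh data - the NIT total depends on the actual income distribution, so it's taken as quoted):

```python
# Back-of-envelope check of the UBI vs. NIT cost figures above.

households = 130_000_000  # ~130M US households (342M people / ~2.6 per household)
poverty_line = 32_150     # quoted poverty line for a family, in dollars/year

ubi_cost = households * poverty_line
print(f"UBI to the poverty line for everyone: ${ubi_cost / 1e12:.2f}T/yr")  # ~$4.18T

# An NIT only tops up households already below the line, so its cost is the
# aggregate shortfall, not households * line. Using the quoted ~$168B:
nit_cost = 168e9
print(f"NIT cost as a share of the UBI figure: {nit_cost / ubi_cost:.1%}")  # ~4.0%
```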

                              Regardless of the approach, I think ensuring everyone is between the poverty line (i.e. unemployed people) and a living wage (i.e. minimum wage people) is a good idea for a few reasons:

                              • allows you to quit your BS job and not be screwed - puts pressure on employers at low-paying jobs to provide a better work experience and pay
                              • allows us to distribute other benefits in dollars instead of services - this book opened my eyes to how much poor people want cash, not benefits; it's easier to move if you have $1k/month in rent allowance than stuck in your section 8 (government assisted) housing
                              • could eliminate the federal minimum wage - if employers aren't paying well, people won't take the job because they'd rather take the gov't handout, so I'd consider the UBI/NIT to be the minimum wage instead
• encourages entrepreneurs to start businesses - my main reason for not starting a business is worries about not being able to cover my basic needs; UBI/NIT covers that, so I probably would have started a few small businesses if I had it as a fallback
                              • can replace Social Security (or other gov't pension plan), since retirees can treat UBI/NIT as their pension, and not be restricted to a specific age to take it (benefits would be lower, but very predictable)

                              Giving people a backup plan encourages people to take more risks, which should result in more value across the economy.

                              • N [email protected]

I came across this article in another Lemmy community that dislikes AI. […]

[email protected] wrote (#77):

Then what is AI doing for companies, if not real work?

                                • N [email protected]

I came across this article in another Lemmy community that dislikes AI. […]

[email protected] wrote (#78):

                                  Sam Altman is a huckster, not a technologist. As such, I don't really care what he says about technology. His purpose has always been to transfer as much money as possible from investors into his own pocket before the bubble bursts. Anything else is incidental.

                                  I am not entirely writing off LLMs, but very little of the discussion about them has been rational. They do some things fairly well and a lot of things quite poorly. It would be nice if we could just focus on the former.

                                  • N [email protected]

I came across this article in another Lemmy community that dislikes AI. […]

1984@lemmy.today1 This user is from outside of this forum
                                    [email protected]
                                    wrote last edited by [email protected]
                                    #79

The guy's name is too perfect.

                                    Altman. Alternative man.

                                    Just not a good alternative.

After his extremely creepy interview with Tucker Carlson about that whistleblower who died, I know he is not right in the head.

                                    1 Reply Last reply
                                    3
                                    • N [email protected]


vandals_handle@lemmy.worldV This user is from outside of this forum
                                      [email protected]
                                      wrote last edited by
                                      #80

                                      Is this where they get rid of the telephone sanitizers and middle managers?

                                      1 Reply Last reply
                                      3
                                      • dojan@pawb.socialD [email protected]

                                        At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

                                        I'd not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. If we're talking about them setting up the skeleton of a project, then templates have already been around for decades at this point. You also don't really set up new projects all that often.

M This user is from outside of this forum
                                        [email protected]
                                        wrote last edited by
                                        #81

Most of what LLMs present as solutions has been around for decades; that's how they learned it, from the source material they were trained on.

So far, AI hasn't surprised me with anything clever or new. Mostly I'm just reminding it to follow directions, and often I'm pointing out better design patterns than what it implements on the first go-round.

Above all: you don't trust what an LLM spits out any more than you'd trust a $50/hr "consultant" from the local high school computer club to deliver business-critical software. You test it, and if you have the ability, you review it at the source level, line by line. But there ARE plenty of businesses out there running "at risk" with sketchier software developers than the local computer club, and OF COURSE they are going to trust AI-generated code further than they should.

Get the popcorn; there will be some entertaining stories about that over the coming year.
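As one concrete shape that "test it" advice can take, here is a minimal sketch (assuming xUnit; PriceCalculator is a hypothetical stand-in for LLM-generated code under review, not anything from the thread):

using System;
using Xunit;

// Hypothetical stand-in for a class an LLM generated.
public class PriceCalculator
{
    // The business rule we care about: a discount can never
    // push the total below zero.
    public decimal ApplyDiscount(decimal total, decimal discount) =>
        Math.Max(0m, total - discount);
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_NeverReturnsNegativeTotal()
    {
        var calc = new PriceCalculator();
        Assert.True(calc.ApplyDiscount(total: 10m, discount: 50m) >= 0m);
    }
}

The point is not the arithmetic; it is that generated code gets pinned down by tests that encode the rules you actually care about, exactly as you would with any untrusted contribution.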

                                        1 Reply Last reply
                                        2
                                        • C [email protected]

I tried to demo an agentic AI in JetBrains to a coworker, just as a "hey, look at this neat thing that can make changes on its own." As the example, I told it to convert a constructor in C# to a primary constructor.

So it "thought" and made the change, "thought" again and reverted it, "thought" once more and made the change again, then "thought" a fourth time and reverted it again. I stopped it there and just shook my head.

M This user is from outside of this forum
                                          [email protected]
                                          wrote last edited by
                                          #82

I had similar experiences six to eight months back. Since Anthropic's Sonnet 4.0, things have changed significantly, and 4.5 is even a bit better. Competing models have been improving similarly.

                                          1 Reply Last reply
                                          3