agnos.is Forums

Most Americans think AI won’t improve their lives, survey says

Technology · 201 Posts · 96 Posters
  • D [email protected]

    That's the first interesting argument I'm reading here. Glad someone takes an honest stance in this discussion instead of just "rich vs poor", "but people will lose jobs" and some random conspiracies in between.

    To your comment: I agree with your sentiment that AI will make it challenging for new brains to evolve as solving difficult tasks is a problem we will encounter much less in the future. I actually never thought about it that way. I don't have a solution for that. I think it will have two outcomes: humans will lose intelligence, or humans will develop different intelligence in a way that we don't understand yet today.

    And you are bringing up efficiency. Efficiency is just a buzzword that big companies are using to replace human labor. How much more efficient is a bank where you have four machines and one human teller? Or a fast food restaurant where the upfront employee just delivers the food to the counter and you can only place an order with a computer?

    I disagree with that. Efficiency is a universal term. And humanity has always striven to do things more efficiently, because it increases the likelihood of survival and quality of life in general. It's a very natural thing and you cannot stop it, much as you cannot stop entropy. Also, I think making things more efficient is good for society. Everything becomes easier, more available, and more fun. I can see a far future where humans no longer need to work and can do whatever they want with their day. Jobs will become hobbies, and family and friends will be what you care about most.

    [email protected] (#167)

    I do not agree that efficiency is good.
    If it were, we would live the way we keep pigs and chickens on meat farms. It would be more efficient to eat bug-based protein, and why waste time on eating at all instead of 100% meal-replacement foods?
    Why keep people with disabilities, or with different "colors of skin" (insert any other trait there), around instead of only the most "efficient" ones?
    The best way, by that logic, is Matrix-esque pods for humans, living in a simulation.
    The only bad part of that picture is that we are not needed at all.

    And these are the dark points of unlimited change.
    We all know capitalism is very bad for the majority. We know big money does not care about marginalized groups. These are all just numbers. And in the end, you and I are all numbers that can be cut. I'm probably not going to be alive, but I hope for a bright future for the upcoming generations. The problem is that I do see AI potentially darkening their skies.
    Don't get me wrong: AI can be a great tool if you learn how to use it. But the benefits are not going to be in the people's hands.

    We need a general overhaul of society where profit is not the only thing that matters. Efficiency is good when you burn renewable wood pellets and want to get the most out of the chemical reaction. Efficiency is good when you use the minimum amount of material to build something (with 3x oversized safety margins). But efficiency in AI, and in social terms, is going to be a problem.

    Humans will not have worry-free lives in our current society. All the replaced labor keeps the earnings in the stockholders' hands. But this went really far from AI. Sorry for the rant, but I do worry about the future.
    I believe blindly accepting something before even attempting to look into the pitfalls is not a great idea. And we never see all the pitfalls coming.

    • T [email protected]

      US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

      In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).

      The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

      [email protected] (#168)

      I do, as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skillset will become more valuable.

      Someone will need to clean up the AI slop. I've already had similar positions where I was brought in to clean up code bases that failed after being outsourced.

      AI is simply the next iteration. The problem is always the same: business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.

      • C [email protected]

        I do, as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skillset will become more valuable.

        Someone will need to clean up the AI slop. I've already had similar positions where I was brought in to clean up code bases that failed after being outsourced.

        AI is simply the next iteration. The problem is always the same: business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.

        [email protected] (#169)

        AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.

        AI isn't good at doing a lot of other things software engineers actually do. It isn't very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.

        • M [email protected]

          There's a hell of a lot more Americans than 60 million.

          Guest (#170)

          Est. 346.8 million, according to Gemini and ChatGPT. 😂

          • N [email protected]

            They're right. What happens to the workers when they're no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.

            [email protected] (#171)

            The same could be applied to humans... but then who would buy consumer goods?

            In all seriousness though, the only solution is for the cost of living to go down and for a UBI to exist, so that the average person can choose not to work and strikes are a legitimate threat to business, because they can more feasibly last for months.

            • T [email protected]

              US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

              In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).

              The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

              [email protected] (#172)

              AI has its place, but they need to stop trying to shoehorn it into anything and everything. It's the new "Internet of Things": cramming internet connectivity into shit that doesn't need it.

              • E [email protected]

                AI has its place, but they need to stop trying to shoehorn it into anything and everything. It's the new "Internet of Things": cramming internet connectivity into shit that doesn't need it.

                [email protected] (#173)

                You're saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.

                • A [email protected]

                  Hardly ever do I come across a person more self-centered and a bigger fan of virtue signaling than you. You ignored literally everything we said, and your alternative was just "SMS". You even went so far as to say the other commenter should stop talking to their 47 friends and family members.

                  [email protected] (#174)

                  Please tell me more about myself, since you know me so well! Not all of us bootlick for Meta.

                  • C [email protected]

                    I do, as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skillset will become more valuable.

                    Someone will need to clean up the AI slop. I've already had similar positions where I was brought in to clean up code bases that failed after being outsourced.

                    AI is simply the next iteration. The problem is always the same: business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.

                    [email protected] (#175)

                    I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.

                    I would argue that if we had much better logic, with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

                    • ? Guest

                      EST 346.8million according to Gemini and ChatGPT. 😂

                      [email protected] (#176)

                      Bruh, what are you even arguing? AI shouldn't be in everything just because; it needs to be reliable and fill a legit need.

                      • M [email protected]

                        I'm about 50/50 between helpful results and "nope, that's not it, either" out of the various AI tools I have used.

                        I think it very much depends on what you're trying to do with it. As a student, or a fresh-grad employee in a typical field, it's probably much more helpful because you are working well-trodden ground.

                        As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you're screwed as far as the really inventive stuff goes. But if you've read "Surely You're Joking, Mr. Feynman!", there's a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot about what they needed to know. Per the text: if you're making something that needs gears, pick your gears from the catalog, but just avoid the largest and smallest of each family/table; they are there because the next size up or down is getting into some kind of problem engineering-wise, so stay away from the edges and you should have much more reliable results. That's an engineer's shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.
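                        The catalog trick described above amounts to "trim the extremes of each family." A minimal sketch, with a made-up gear family; the tooth counts are illustrative, not from any real catalog:

                        ```python
                        # "Stay away from the edges" heuristic: the smallest and largest entries of
                        # a catalog family sit right at some engineering limit, so pick from the middle.

                        def usable_sizes(family):
                            """Return a catalog family sorted, with its extreme entries dropped."""
                            if len(family) <= 2:
                                return []  # no safe middle to choose from
                            ordered = sorted(family)
                            return ordered[1:-1]

                        # Hypothetical spur-gear family, listed by tooth count.
                        spur_gear_teeth = [12, 16, 20, 24, 32, 48, 60]
                        print(usable_sizes(spur_gear_teeth))  # everything but 12 and 60
                        ```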

                        [email protected] (#177)

                        My issue is that I'm fairly established in my career, so I mostly need to reference things, which LLMs do a poor job at. As in, I usually need links to official documentation, not examples of how to do a thing.

                        That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.

                        LLMs aren't catalogs though, and they absolutely return different things for the same query. Search engines are essentially catalogs, and they're what I reach for most of the time.

                        LLMs are good if I want an intro to a subject I don't know much about, and they help generate keywords to search for more specific information. I just don't do that all that much anymore.

                        • S [email protected]

                          New technologies are not the issue. The problem is billionaires will fuck it up because they can't control their insatiable fucking greed.

                          [email protected] (#178)

                          .

                          • I [email protected]

                            I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.

                            I would argue that if we had much better logic, with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

                            [email protected] (#179)

                            If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren't it.

                            They are language models, and until someone can replace that second L with Logic, no amount of layering is going to get us there.

                            Those layers are basically all the previous AI techniques laid over the top of an LLM, but anyone who has a basic understanding of languages can tell you how illogical they are.

                            • W [email protected]

                              People being economically displaced by innovation that increases productivity is good, provided it happens at a reasonable pace and there is a sufficient social safety net to get those people back on their feet. Unfortunately, those safety nets don't exist everywhere and have been under attack (in the West) for the past 40 years.

                              [email protected] (#180)

                              Yep, that's my point. They just assumed it must be the case, when that hasn't been the outcome: innovation hasn't coincided with improved, affordable living. Instead it's just been further class divide despite the advancements.

                              Innovation is its own separate thing from human outcomes, and the advancement of improved human lives needs its own care and guidance. It's not going to improve just because science and tech are improving. Otherwise humans are no different from any other disposable resource in the view of the powers that be, and will be discarded and abused without care.

                              • F [email protected]

                                Of course, they learned to code.

                                [email protected] (#181)

                                And became influencers

                                • D [email protected]

                                  The same could be applied to humans... but then who would buy consumer goods?

                                  In all seriousness though, the only solution is for the cost of living to go down and for a UBI to exist, so that the average person can choose not to work and strikes are a legitimate threat to business, because they can more feasibly last for months.

                                  [email protected] (#182)

                                  What's the point of producing goods for "useless eaters"?

                                  • T [email protected]

                                    US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

                                    In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).

                                    The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

                                    [email protected] (#183)

                                    I mean, it hasn't thus far.

                                    • C [email protected]

                                      I do, as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skillset will become more valuable.

                                      Someone will need to clean up the AI slop. I've already had similar positions where I was brought in to clean up code bases that failed after being outsourced.

                                      AI is simply the next iteration. The problem is always the same: business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.

                                      [email protected] (#184)

                                      A completely random story, but: I'm on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team; they had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing was SO unsatisfying with AI. We weren't tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you're doing RAG stuff, maybe, or other things. However, I was "crafting" my prompt and I could not give a shit less about writing a perfect prompt. I'm typically used to coding what I want, but here I had to find out how to phrase it properly: "please don't format it like X". Like, I wasn't using AI to write code; it was a service endpoint.

                                      During lunch with the AI team, they keep saying things like "we only have 10 years left at most". I was like, "but if you have AI spit out this code, if something goes wrong... don't you need us to look into it?" They were like, "yeah, but what if it can tell you exactly what the code is doing?" I'm like, "but who's going to understand what it's saying...?" "No, it can explain the type of problem to anyone."

                                      I said, I feel like I'm talking to a libertarian right now. Every response seems to be some solution that doesn't exist.
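                                      For what it's worth, the "service endpoint" style of AI work described above usually boils down to assembling a chat payload where the formatting rules live in the prompt text. A minimal sketch; the model name and payload shape follow the common chat-completion convention and are illustrative, not any specific vendor's API:

                                      ```python
                                      # Prompt "crafting" as plain data assembly: the only engineering surface
                                      # is the system prompt text, e.g. "please don't format it like X".

                                      def build_request(user_text, forbidden_format="a markdown table"):
                                          """Assemble a chat-style payload with formatting rules in the system prompt."""
                                          return {
                                              "model": "some-hosted-model",  # hypothetical model name
                                              "messages": [
                                                  {
                                                      "role": "system",
                                                      "content": f"Answer plainly. Do not format the reply as {forbidden_format}.",
                                                  },
                                                  {"role": "user", "content": user_text},
                                              ],
                                          }

                                      req = build_request("Summarize this support ticket.")
                                      print(req["messages"][0]["content"])
                                      ```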

                                      • M [email protected]

                                        Good enough is the keyword in a lot of things. That's how fast fashion got this big.

                                        [email protected] (#185)

                                        Fast fashion (and everything else in the commercial marketplace) needs to start paying for its externalized costs, starting with landfill space, but also the pollution and possibly the social supports that go into the delivery of its products. But then, people are stupid when it comes to fashion; they'll pay all kinds of premiums if it makes them look like their friends.

                                        • D [email protected]

                                          You're using it wrong. My experience is different from yours. It produces transfer knowledge in the queries I ask it. Not even a hundred Google searches can replace transfer knowledge.

                                          [email protected] (#186)

                                          You’re using it wrong.

                                          Your use case is different from mine.
