Most Americans think AI won’t improve their lives, survey says
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
I mean, it hasn't thus far.
-
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is dry because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skill set will become more valuable.
Someone will need to clean up the AI slop. I've already been in similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: the business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.
A completely random story, but: I'm on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (a complete mismanagement of resources). Also, the work I was doing was SO unsatisfying with AI. We weren't tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you're doing RAG stuff, maybe, or other things. However, I was "crafting" my prompt, and I could not give a shit less about writing a perfect prompt. I'm typically used to coding what I want, but here I had to figure out how to write it properly: "please don't format it like X". I wasn't using AI to write code; it was a service endpoint.
During lunch with the AI team, they keep saying things like, "We only have 10 years left at most." I was like, "But if you have AI spit out this code, and something goes wrong... don't you need us to look into it?" They were like, "Yeah, but what if it can tell you exactly what the code is doing?" I'm like, "But who's going to understand what it's saying...?" "No, it can explain the type of problem to anyone."
I said, I feel like I'm talking to a libertarian right now. Every response seems to be some solution that doesn't exist.
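For what it's worth, the "service endpoint" setup described above can be sketched in a few lines. This is a minimal, hypothetical sketch: the URL, model name, and payload shape are invented, loosely modeled on common chat-completion-style APIs, and nothing is actually sent over the network.

```python
import json

# Hypothetical chat-completion endpoint; no request is actually sent here.
CHAT_URL = "https://api.example.com/v1/chat/completions"

def build_request(user_text: str) -> dict:
    """Wrap raw input in a hand-crafted prompt, formatting constraints and all."""
    system_prompt = (
        "Summarize the input as plain sentences. "
        # The "please don't format it like X" part of prompt-crafting:
        "Please don't format the answer as a bulleted list."
    )
    return {
        "model": "some-hosted-model",  # hypothetical model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }

payload = build_request("quarterly infra costs by service")
print(json.dumps(payload, indent=2))
```

No model weights are touched; all the "engineering" lives in the system prompt string, which is exactly the dissatisfaction voiced above.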
-
"Good enough" is the key phrase in a lot of things. That's how fast fashion got this big.
Fast fashion (and everything else in the commercial marketplace) needs to start paying for its externalized costs - starting with landfill space, but also the pollution and possibly the social supports that go into the delivery of its products. But then, people are stupid when it comes to fashion; they'll pay all kinds of premiums if it makes them look like their friends.
-
You're using it wrong. My experience is different from yours. It produces transfer knowledge for the queries I ask it. Not even a hundred Google searches can replace transfer knowledge.
You’re using it wrong.
Your use case is different from mine.
-
We are ants in an anthill. Gears in a machine. Act like it. Stop thinking in classes like "rich vs. poor." When you become obsolete, it's nobody's fault. This kind of thinking always comes from people who don't understand how this world works.
Progress always comes, finds its way and you can never stop it. Like water in a river. Like entropy. Adapt early instead of desperately forcing against it.
We are ants in an anthill. Gears in a machine. Act like it.
See Woody Allen in Antz (the 1998 movie).
Adapt early instead of desperately forcing against it.
There should be a balance. Today's world is already desperately thrashing to "stay ahead of the curve" and pouring outrageous investments into blind alleys that groupthink believes are the "next big thing."
The reality of automation could be an abundance of what we need, easily available to all, with surplus resources available for all to share and contribute to as they wish - within limits, of course.
It's going to take some desperate forcing to get the resources distributed more widely than they currently are.
-
Are you a poor kid or something? Like what kind of question even is this? Why does it even need to be personal at all? This thread is not about me...
And no, I'm not. I stand to inherit nothing. I'm still a student. I'm not wealthy or anything like that.
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
Butlerian Jihad
-
This is collateral damage of societal progress. This is a phenomenon as old as humanity. You can't fight it. And it has brought us to where we are now. From cavemen to space explorers.
Whoever the mod was that decided to delete my comment is a fool. This guy above is a Nazi apologist.
-
Great for the people getting fired, or finding that the middle-class jobs they used to have now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, employment benefits, and opportunities falling.
This shouldn't come as a surprise. Everyone who's surprised by that is either not educated in how the economy works or in how societal progress works. There are always winners and losers, but society makes net-positive progress as a whole.
I have no empathy for people losing their jobs. Even if I lose my job, I accept it. It's just life. Humanity is a really big machine of many gears. Some gears replace others to make the machine run more efficiently.
And it's so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.
This is just a sad excuse I hear all the time. The moment society gets intense and change is about to happen, a perpetrator needs to be found. But most people don't realize that the people at the top change all the time when the economy changes. They die as well. It's a dynamic system, and there is no single perpetrator in a dynamic system. The only perpetrator is progress. And progress is like entropy: it always finds its way and you cannot stop it. Those who attempt to stop it instead of adapting to it will be crushed.
I have no empathy
for people losing their jobs
FTFY
-
Because you write like you think this can't reach you, like you're always going to have food and shelter no matter what happens.
If it reaches me, so be it. That's life. Survival of the fittest. It's my own responsibility to do the best in the environment I live in.
-
AI has its place, but they need to stop trying to shoehorn it into anything and everything. It's the new "Internet of Things" cramming of internet connectivity into shit that doesn't need it.
Now your smart fridge can propose unpalatable recipes. Woo fucking hoo.
-
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is dry because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skill set will become more valuable.
Someone will need to clean up the AI slop. I've already been in similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: the business doesn't know what it really wants and needs, and has no ability to assess what has been delivered.
If it walks and quacks like a speculative bubble...
I'm working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we've gotten bupkis.
And most firms I've encountered don't even have potential uses, they're just doing buzzword engineering. I'd say it's more like the "put blockchain into everything" fad than like outsourcing, which was a bad idea for entirely different reasons.
I'm not saying AI will never have uses. But as it's currently implemented, I've seen no use of it that makes a compelling business case.
-
AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.
AI isn't good at doing a lot of other things software engineers actually do. It isn't very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.
I work in an environment where we're dealing with high volumes of data, but not like a few meg each for millions of users. More like a few hundred TB fed into multiple pipelines for different kinds of analysis and reduction.
There's a shit-ton of prior art for how to scale up relatively simple web apps to support mass adoption. But there's next to nothing about how to do what we do, because hardly anyone does it. So look ma, no training set!
-
Android Messages and Facebook Messenger have also pushed AI in as "something you can chat with."
I'm not here to talk to your fucking chatbot; I'm here to talk to my friends and family.
It's easier to up-sell and cross-sell if you're talking to an AI.
-
Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don't give a fuck about you and he keeps licking boots. How many others are like this??
How many others are like this??
Far too many: more than zero.
-
This is collateral damage of societal progress. This is a phenomenon as old as humanity. You can't fight it. And it has brought us to where we are now. From cavemen to space explorers.
Yeah, yeah, omelettes...eggs... heard it all before.
-
You can't learn from chatbots, though. How can you trust that the material is accurate? Any time I've asked a chatbot about subject matter that I'm well versed in, it makes massive mistakes.
All you're proving is "we can learn badly, faster!" Or worse: we can spread misinformation faster.
-
Removing the need to do any research is just removing another exercise for the brain. Perfectly crafted AI educational videos might be closer to mental junk food than anything.
-
Whoever the mod was that decided to delete my comment is a fool. This guy above is a Nazi apologist.
What makes you think that?
-