Most Americans think AI won’t improve their lives, survey says
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while just 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public say they are "more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public think AI will be good for them, whereas nearly half anticipate they will be personally harmed by it.
AI is mainly a tool for the powerful to oppress the less blessed. I mean, cutting actual professionals out of the process to let CEOs' wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while just 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public say they are "more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public think AI will be good for them, whereas nearly half anticipate they will be personally harmed by it.
Lol they get a capable chatbot that blows everything out of the water and suddenly they are like "yeah, this will be the last big thing"
-
If it was marketed and used for what it's actually good at this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
-
AI does improve our lives. Saying it doesn't is borderline delusional.
Can you give some examples that I unknowingly use and that improve my life?
-
If it was marketed and used for what it's actually good at this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to, and take broad direction from, the board. The board can fire the CEO. If you get rid of a CEO, the board will just hire a replacement.
-
AI does improve our lives. Saying it doesn't is borderline delusional.
Every technology shift creates winners and losers.
There's already documented harm from algorithms making callous, biased decisions that ruin people's lives; automated insurance claim rejections are one example.
We know that AI is going to bring algorithmic decisions into many new places where it can do harm. AI adoption is currently on track to get to those places well before the most important harm reduction solutions are mature.
We should take care that we do not gaslight people who will be harmed by this trend, by telling them they are better off.
-
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.
For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.
And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
-
AI is mainly a tool for the powerful to oppress the less blessed. I mean, cutting actual professionals out of the process to let CEOs' wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.
I would agree with that if the cost of the tool was prohibitively expensive for the average person, but it’s really not.
-
Depends on what we mean by "AI".
Machine learning? It's already had a huge effect; drug discovery alone is transformative.
LLMs and the like? Yeah I'm not sure how positive these are. I don't think they've actually been all that impactful so far.
Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.
It could be a bridge to post-scarcity, but under capitalism it's much more likely it will erode the working class further and exacerbate inequality.
As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.
-
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?
-
Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.
For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.
And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
Except no employer will allow you to use your own AI model. Just like you can't bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.
-
As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.
I considered this, and I think it depends mostly on ownership and means of production.
Even in the scenario where everyone has access to superhuman models, labor would still be devalued. Combined with robotics and other forms of automation, that means the capitalist class will no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything, and would likely revert to an agricultural life (assuming access to land) or just be propped up with something like UBI.
Basically, I don't see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don't think it will do much to get us there.
-
Can you give some examples that I unknowingly use and that improve my life?
Translation apps would be the main one for LLM tech; LLMs largely came out of Google's research into machine translation.
-
Except no employer will allow you to use your own AI model. Just like you can't bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.
Presumably "small business" means the self-employed or an employee-owned company, not the bureaucratic nightmare that most companies are.
-
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to, and take broad direction from, the board. The board can fire the CEO. If you get rid of a CEO, the board will just hire a replacement.
And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.
The problems are systemic, not individual.
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while just 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public say they are "more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public think AI will be good for them, whereas nearly half anticipate they will be personally harmed by it.
It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.
-
It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.
Everyone gains from progress. We've had the same discussion over and over again: when the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their jobs every time progress is made. But being against progress for that reason is just stupid.
-
AI is mainly a tool for the powerful to oppress the less blessed. I mean, cutting actual professionals out of the process to let CEOs' wildest dreams go unchecked already has devastating consequences, if rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.
Life isn't always Occam's Razor.