Most Americans think AI won’t improve their lives, survey says
-
You're using it wrong then. These tools are so incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I'm using it every day. And every colleague I talk to agrees 100%.
Then you must know something the rest of us don't. I've found it marginally useful, but it leads me down useless rabbit holes more than it helps.
-
You're using it wrong then. These tools are so incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I'm using it every day. And every colleague I talk to agrees 100%.
I'll admit my local model has given me some insight, but when I research a topic further, I usually find the source it likely spat the answer out from. That part is helpful, but I feel as though, if my normal search experience weren't so polluted with AI-written regurgitation of the next result down, I would have found the good primary source on my own. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
If you have more insight into what tools would improve my impression, especially ones I can run locally, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) so far, and it certainly has not, and probably will not, live up to the hype forecast by their CEOs.
Also, if you have access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but it does not connect the dataflow between the nodes (the hardest part) at all. Sometimes it will parse the dataflow connections and return what you were searching for (e.g., a specific formula used in a large dataflow), but little of that seems to require AI.
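For reference, the sort of snippet described above, computing a body's moments of inertia about each axis, can be sketched like this. This is a minimal illustration for a set of point masses using NumPy (the function name and the example bodies are my own; this is not the code the model actually produced):

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor of point masses about the origin.

    masses: (N,) array of masses; positions: (N, 3) array of x, y, z coordinates.
    """
    masses = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    r2 = np.sum(r * r, axis=1)  # squared distance of each mass from the origin
    # I_ij = sum_k m_k * (|r_k|^2 * delta_ij - r_ki * r_kj)
    I = (np.einsum('k,k,ij->ij', masses, r2, np.eye(3))
         - np.einsum('k,ki,kj->ij', masses, r, r))
    return I

# Two unit masses on the x-axis at +/-1: zero inertia about x, 2 about y and z.
I = inertia_tensor([1.0, 1.0], [[1, 0, 0], [-1, 0, 0]])
principal = np.linalg.eigvalsh(I)  # principal moments of inertia
```

The diagonal of the tensor gives the moments about the coordinate axes; the eigenvalues give the principal moments. This is exactly the kind of well-trodden, textbook code where it's worth comparing the model's output against a primary source.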
-
US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
-
This is collateral damage of societal progress, a phenomenon as old as humanity. You can't fight it. And it has brought us to where we are now: from cavemen to space explorers.
Which are separate things from people's ability to financially support themselves.
People can have smartphones and tech the past didn't have, but be increasingly worse off financially and unable to afford housing.
And you aren't a space explorer.
-
Maybe that's because every time a new AI feature rolls out, the product it's improving gets substantially worse.
Maybe that's because they're using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose-lose.
-
Everyone gains from progress. We've had the same discussion over and over again: when the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people lose their jobs every time progress is made. But being against progress for that reason is just stupid.
being against progress for that reason is just stupid.
Under the current economic model, being against progress is just self-preservation.
Yes, we could all benefit from AI in some glorious future that doesn't see the AI-displaced workers turned into toys for the rich, or forgotten refuse in slums.
-
I'm not sure at this point. The sewing machine was just automated stitching. This is more like photography versus landscape painters, only worse.
With creative AI, most visual art work has basically gone from "I'll pay $20K and wait 30 days for this project" to "I'll pay AI $100 to do it instead." Soon doctors, therapists, and teachers will be looking down the barrel: "Why pay $150 for one therapy session when I can have an AI friend for $20 a month?"
In the past you could train yourself to use a sewing machine, or learn to operate cameras and develop photos. Now I don't even have any idea where this goes. Machine stitching is objectively worse than hand stitching, but... it's good enough and so much more efficient, so that's how things are done now; it has become the norm.
-
AI is changing the landscape of our society. It's only "destroying" society if that's your definition of change.
But the fact is, AI makes every aspect where it's being used a lot more productive and easier. And that has to be a good thing in the long run. It always has been.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it's being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers' time much more than any "productivity gained" on the producer side.
-
And as someone who has extensively set up such systems on their home server... yeah, it's a great Google Home replacement, nothing more. It's beyond useless on Power Automate, which I use (unwillingly) at my job. Copilot can't even parse and match items from two lists. Despite my company trying its damn best to encourage "our own" AI (ChatGPT Enterprise), nobody I have talked with has found a use.
AI search is occasionally faster and easier than slogging through the source material the AI was trained on. The source material for programming is pretty weak itself, though, so there's an issue.
I think AI has a lot of untapped potential, but it's going to be a VERY long time before people who don't know how to ask for what they want will be able to communicate what they want to an AI.
A lot of programming today gets its value from programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical or counterproductive.
-
You're using it wrong then. These tools are so incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I'm using it every day. And every colleague I talk to agrees 100%.
If you were too lazy to read three Google search results before, then yes... AI is amazing, in that it shows you what you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn't have skimmed for myself in about twice to five times the time.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
-
I'll admit my local model has given me some insight, but when I research a topic further, I usually find the source it likely spat the answer out from. That part is helpful, but I feel as though, if my normal search experience weren't so polluted with AI-written regurgitation of the next result down, I would have found the good primary source on my own. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
If you have more insight into what tools would improve my impression, especially ones I can run locally, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) so far, and it certainly has not, and probably will not, live up to the hype forecast by their CEOs.
Also, if you have access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but it does not connect the dataflow between the nodes (the hardest part) at all. Sometimes it will parse the dataflow connections and return what you were searching for (e.g., a specific formula used in a large dataflow), but little of that seems to require AI.
I think a lot depends on where "on the curve" you are working, too. If you're out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my Master's thesis in 1989; it took me months of research, with journals delivered via inter-library loan, before I found mention of other projects doing essentially what I was doing. With today's research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven't read Melancholy Elephants, it's a great reference point for what we're getting into with modern access to everything:
-
AI is changing the landscape of our society. It's only "destroying" society if that's your definition of change.
But the fact is, AI makes every aspect where it's being used a lot more productive and easier. And that has to be a good thing in the long run. It always has been.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
I use AI for programming questions, because it's easier than digging through official docs for an hour (if they exist) and slogging through frustrating trial and error.
However, quite often the AI answers are wrong: it inserts nonsense code, uses for instead of foreach, or tries to access variables that are not always set.
Yes, it helps, but it's usually only 60% right.
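To make the "variables that are not always set" failure mode concrete, here is a hypothetical Python analogue (the function names and the scenario are invented for illustration, not taken from an actual AI answer): the buggy version only assigns the result variable inside a conditional, so inputs that never hit that branch crash.

```python
def first_even_buggy(numbers):
    # AI-style bug: `result` is only assigned inside the if-branch,
    # so a list containing no even numbers raises UnboundLocalError.
    for n in numbers:
        if n % 2 == 0:
            result = n
            break
    return result

def first_even_fixed(numbers):
    # Fix: initialize the variable before the loop so every code path defines it.
    result = None
    for n in numbers:
        if n % 2 == 0:
            result = n
            break
    return result
```

The bug is easy to miss in review because the happy path works; it only surfaces on the input the model didn't consider.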
-
Then you must know something the rest of us don't. I've found it marginally useful, but it leads me down useless rabbit holes more than it helps.
I'm about 50/50 between helpful results and "nope, that's not it either" with the various AI tools I have used.
I think it very much depends on what you're trying to do with it. As a student, or a fresh-grad employee in a typical field, it's probably much more helpful, because you are working well-trod ground.
As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you're screwed as far as the really inventive stuff goes. But if you've read "Surely You're Joking, Mr. Feynman!", there's a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot of what they needed to know. Per the text: if you're making something that needs gears, pick them from the catalog, but avoid the largest and smallest of each family/table. Those sizes sit at the edges because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you should get much more reliable results. That's an engineer's shortcut for drawing on thousands, maybe millions, of man-years of prior gear research, development, and engineering just by referencing a catalog.
-
I considered this, and I think it depends mostly on ownership and means of production.
Even in the scenario where everyone has access to superhuman models, that would still lead to labor being devalued. When combined with robotics and other forms of automation, the capitalist class will no longer need workers, and large parts of the economy would disappear. That would create a two tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything, and would likely revert to an agricultural society (assuming access to land), or just propped up with something like UBI.
Basically, I don't see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don't think it will do much to get us there.
-
Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don't give a fuck about you and he keeps licking boots. How many others are like this??
-
If it was marketed and used for what it's actually good at this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That's an opinion - one I share in the vast majority of cases - but there is a lot of art work that AI really can do "well enough" for the purpose, and in those cases we should be freeing up the human artists to do the more creative work. Writers, likewise: if AI is turning out acceptable copy (which in my experience is almost never, so far, but hypothetically, eventually), why use human writers for that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, and not being realistic about how long it will really take to achieve the things they're hyping.
"Artificial Intelligence" has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years compared to the previous 35, but it's likely to be 35 more before half the things being touted as "here today" are actually working at a positive ROI. There will be more than a few more examples like the "smart" grocery store, where you just put things in your basket and walk out and get charged "appropriately", supposedly based on AI surveillance but really mostly powered by low-cost labor somewhere else on the planet.
-
This is exactly the result I'd expect. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour-workweek utopia. These Silicon Valley AI fanatics are the same ones saying that basic social welfare programs are naive and unimplementable, so why would they suddenly change their entire perspective on life?
we will be no closer to some kind of 8-hour workweek utopia.
If you haven't read this, it's short and worth the time. The short-workweek utopia is one of the two possible outcomes it imagines: https://marshallbrain.com/manna1
-
Maybe it's because the American public are shortsighted idiots who don't understand concepts like future outcomes being based on present decisions.