Most Americans think AI won’t improve their lives, survey says
-
AI is changing the landscape of our society. It's only "destroying" society if that's your definition of change.
But the fact is, AI makes every aspect where it's being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it's being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers' time much more than any "productivity gained" on the producer side.
-
And as someone who has extensively set up such systems on their home server... yeah, it's a great Google Home replacement, nothing more. It's beyond useless in Power Automate, which I use (unwillingly) at my job. Copilot can't even parse and match items from two lists. Despite my company trying its damn best to encourage "our own" AI (ChatGPT Enterprise), nobody I have talked with has found a use.
AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there's an issue.
I think AI has a lot of untapped potential, but it's going to be a VERY long time before people who don't know how to articulate what they want will be able to communicate it to an AI.
A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.
-
You're using it wrong, then. These tools are incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I'm using it every day. And every colleague I talk to agrees 100%.
If you were too lazy to read three Google search results before, yes... AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn't have skimmed for myself in two to five times the time.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
-
I'll admit my local model has given me some insight, but when I research the topic further, I usually find the source it likely spat the answer out from. That is helpful, but I feel as though, if my normal search experience weren't so polluted with AI-written regurgitations of the next result down, I would've found that nice primary source anyway. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
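For context, the kind of routine I mean looks roughly like this (a minimal sketch assuming the body is approximated by point masses; the function name and layout are my own, not the model's output):

```python
# Moments of inertia of a rigid body about the x, y, and z axes,
# approximating the body as a collection of point masses.
# Each particle is (mass, x, y, z); the axes pass through the origin.

def moments_of_inertia(particles):
    ixx = sum(m * (y * y + z * z) for m, x, y, z in particles)
    iyy = sum(m * (x * x + z * z) for m, x, y, z in particles)
    izz = sum(m * (x * x + y * y) for m, x, y, z in particles)
    return ixx, iyy, izz

# A 2 kg point mass 3 m out on the y axis resists rotation about x and z,
# but contributes nothing about y (it sits on that axis).
print(moments_of_inertia([(2.0, 0.0, 3.0, 0.0)]))  # (18.0, 0.0, 18.0)
```

The AI output was close to this, but the primary source it was regurgitated from explained the sign conventions and axis choices, which is what I actually needed.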
If you have more insight into what tools would improve my impression, especially ones I can run locally, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) thus far, and it certainly has not, and probably will not, live up to the hype forecast by the CEOs.
Also, if you can get access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but it does not connect the dataflow between the nodes (the hardest part) whatsoever. Sometimes it will parse the dataflow connections and return what you were searching for (e.g., a specific formula used in a large dataflow), but little of that seems to need AI at all.
I think a lot depends on where "on the curve" you are working, too. If you're out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my Master's thesis in 1989; it took me months of research, and journals delivered via inter-library loan, before I found mention of other projects doing essentially what I was doing. With today's research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven't read Spider Robinson's "Melancholy Elephants," it's a great reference point for what we're getting into with modern access to everything.
-
AI is changing the landscape of our society. It's only "destroying" society if that's your definition of change.
But the fact is, AI makes every aspect where it's being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
I use AI for programming questions, because it's easier than digging for an hour through official docs (if they exist) and going through frustrating trial and error.
However, quite often the AI answers are wrong: inserting nonsense code, using for instead of foreach, or trying to access variables that are not always set.
Yes, it helps, but it's usually only 60% right.
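To illustrate the "variables that are not always set" failure (a made-up Python example in the same spirit, not an actual answer I received): generated code often assigns a result only inside a conditional, then returns it unconditionally, which blows up whenever nothing matched.

```python
# AI-generated loops often assign `result` only inside the `if`, so the
# final `return result` raises NameError when no element matches.
# The fix is the one-line default before the loop.

def first_long_word(words, min_len=6):
    result = None  # generated code frequently omits this default
    for word in words:
        if len(word) >= min_len:
            result = word
            break
    return result

print(first_long_word(["hi", "elephant", "zoo"]))  # elephant
print(first_long_word(["hi", "zoo"]))              # None
```

Spotting that missing default takes seconds once you know the pattern, which is roughly what "60% right" feels like in practice.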
-
AI is changing the landscape of our society. It's only "destroying" society if that's your definition of change.
But the fact is, AI makes every aspect where it's being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.
Instead of holding out against progress (which is impossible to do for long), you should embrace it and go from there.
-
Then you must know something the rest of us don't. I've found it marginally useful, but it leads me down useless rabbit holes more than it helps.
I'm about 50/50 between helpful results and "nope, that's not it, either" out of the various AI tools I have used.
I think it very much depends on what you're trying to do with it. As a student, or a fresh-grad employee in a typical field, it's probably much more helpful, because you are working well-trodden ground.
As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you're screwed as far as the really inventive stuff goes. But... if you've read "Surely You're Joking, Mr. Feynman!", there's a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot of what they needed to know. Per the text: if you're making something that needs gears, pick your gears from the catalog, but avoid the largest and smallest in each family/table - they are only there because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you should get much more reliable results. That's an engineer's shortcut: thousands, maybe millions, of man-years of prior gear research, development, and engineering, applied just by referencing a catalog.
-
I considered this, and I think it depends mostly on ownership and means of production.
Even in the scenario where everyone has access to superhuman models, labor would still be devalued. Combined with robotics and other forms of automation, the capitalist class would no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society: those with resources become incredibly wealthy and powerful, while those without have no ability to do much of anything, and would likely revert to an agricultural existence (assuming access to land), or be propped up with something like UBI.
Basically, I don't see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don't think it will do much to get us there.
-
Maybe that's because every time a new AI feature rolls out, the product it's improving gets substantially worse.
-
Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don't give a fuck about you and he keeps licking boots. How many others are like this??
-
If it was marketed and used for what it's actually good at this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That's an opinion - one I share in the vast majority of cases - but there's a lot of artwork that AI really can do "good enough" for the purpose, and we really should be freeing up the human artists to do the more creative work. As for writers: if AI is turning out acceptable copy (which in my experience is almost never, so far - but hypothetically, eventually), why use human writers for that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it's really going to take to achieve the things they're hyping.
"Artificial Intelligence" has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years compared to the previous 35, but it's likely to be 35 more before half the things being touted as "here today" are actually working at a positive ROI. There will be more than a few more examples like the "smart" grocery store where you just put things in your basket and walk out and get charged "appropriately", supposedly based on AI surveillance, but really powered mostly by low-cost labor somewhere else on the planet.
-
This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?
we will be no closer to some kind of 8-hour workweek utopia.
If you haven't read this, it's short and worth the time. The short work week utopia is one of two possible outcomes imagined: https://marshallbrain.com/manna1
-
Maybe it's because the American public are shortsighted idiots who don't understand that future outcomes are based on present decisions.
-
This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.
Sure, you can relax on a beach; you have all the time in the world now that you're unemployed. The disconnect is mind-boggling.
Universal Basic Income - it's either that, or just kill all the unnecessary poor people.
-
Except no employer will allow you to use your own AI model. Just as you can't bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.
No big employer... there are plenty of smaller companies that are open to doing whatever works.
-
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to, and get broad instruction from, the board. The board can fire the CEO. If you get rid of a CEO, the board will just hire a replacement.
CEOs are the figurehead, they are virtually bound by law to act sociopathically - in the interests of their shareholders over everyone else. Carl Icahn also has an interesting take on a particularly upsetting emergent property of our system of CEO selection: https://dealbreaker.com/2007/10/icahn-explains-why-are-there-so-many-idiots-running-shit
-
And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.
The problems are systemic, not individual.
Shareholders only care about the value of their shares increasing. It's a productive arrangement, up to a point, but we've gotten too good at ignoring and externalizing the human, environmental, and long-term costs in pursuit of ever-increasing shareholder value.
-
This is like asking tobacco farmers what their thoughts are on smoking.
Al Gore's family thought that the political tide was turning against it, so they gave up tobacco farming in the late 1980s - and focused on politics.
-
See also; the cotton gin.
The cotton gin has been used as an argument for why slavery finally became unacceptable: until then, society "needed" slaves to do the work, but with the cotton gin and other automation, the costs of slavery started becoming higher than the value.
-
Right?! It's literally just a messenger. Honestly, all I expect from it is an easy and reliable way of sending messages to my contacts. Anything else is questionable.