Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
Totally agree with you! I'm in a different field but I see it in the same light. Let it get you to 80-90% of whatever that task is and then refine from there. It saves you time to add on all the extra cool shit that that last stretch would've otherwise eaten up. So many people assume you have to take it at 100% face value. Just take what it gives you as a jumping-off point.
-
I’d agree that anybody who just takes the first answer offered them by any means as fact would have the same results as this study.
-
Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance>
-
Unlike those others, Microsoft could do something about this considering they are literally part of the problem.
And yet I doubt Copilot will be going anywhere.
-
AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.
Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.
-
People generally don't learn from an unreliable teacher.
-
Please show me the peer-reviewed scientific journal that requires a minimum number of words per article.
Seems like these journals don't have a word count minimum: https://paperpile.com/blog/shortest-papers/
-
All tools can be abused tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: copy the first answer and hope for the best, as the memes went.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described; bad programmers copy and run without reading. It's just like the SO kids.
-
Literally everyone learns from unreliable teachers, the question is just how reliable.
-
They in fact often have word and page limits, and most journal articles I've been a part of have had a period of cutting and trimming at the end in order to fit within those limits.
-
That makes sense considering a journal can only be so many pages long.
-
I once asked ChatGPT who I was, and it hallucinated this weird thing about me being a motivational speaker for businesses. I have a very unusual name and there is only one other person in the U.S. (now the only person in the U.S., since I just emigrated) with my name. Neither of us is a motivational speaker or ever was.
Then I asked it again and it said it had no idea who I was. Which is kind of insulting to my namesake since he won an Emmy award.
-
That snark doesn't help anyone.
Imagine the AI were 100% perfect and gave the correct answer every time: people using it would still have a significantly reduced diversity of results, because they would always be using the same tool to get the same correct answer.
That people using an AI get a smaller diversity of results is neither good nor bad; it's just the way things are, the same way people using the same pack of pens use a smaller variety of colours than those using whatever pens they happen to have.
-
"researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed."
It's one thing to try to do and then ask for help (as you did), it's another to just ask it to "do x" without thought or effort which is what the study is about.
-
Jokes on you. Volume is always off on my phone, so I read the ai.
Also, I don't actually ever use the ai.
-
I am not worried about people here on Lemmy. I am worried about people who don't know much about computers at all. i.e. the majority of the public. They think computers are magic. This will make it far worse.
-
First off, the AI isn't correct 100% of the time, and it never will be.
Secondly, you as well are stating in so many more words that people stop thinking critically about its output. They accept it.
That is a lack of critical thinking on the part of the AI users, as well as yourself and the original poster.
Like, I don’t understand the argument you all are making here - am I going fucking crazy? “Bro it’s not that they don’t think critically it’s just that they accept whatever they’re given” which is the fucking definition of a lack of critical thinking.
-
Training those AIs was expensive. It swallowed very large sums of VC cash, and they intend to make it back.
Remember, their money is way more important than your life.
-
I think Lemmy specifically, and anti-corporate mistrust in general, drives the majority of the negativity towards AI. Everyone is cash/land-grabbing towards anything that sticks, trying to shove their product down everyone's throat.
People don't like that behavior and thus shun it. Understandable. However, don't let that guide your entire thinking as a whole; it seems to cloud most people entirely, to the point they can't fathom an alternative perspective.
I think the vast majority of tools/software originate from a source of good but then get transformed into bad actors because of monetization. Eventually, though, and trends over time prove this, things become open source or free, and the real good period arrives after the refinement-and-profit period. It's even parasitic to some degree.