Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
yes, exactly. You lose your critical thinking skills
-
Totally agree with you! I'm in a different field but I see it the same way. Let it get you to 80-90% of whatever the task is and then refine from there. It saves you time that you can spend adding all the extra cool shit that getting to that 90% would've eaten up. So many people assume you have to take it at 100% face value. Just take what it gives you as a jumping-off point.
-
I’d agree that anybody who takes the first answer offered to them, by any means, as fact would show the same results as this study.
-
Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance>
-
Unlike those others, Microsoft could do something about this considering they are literally part of the problem.
And yet I doubt Copilot will be going anywhere.
-
AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.
Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.
-
People generally don't learn from an unreliable teacher.
-
Please show me the peer-reviewed scientific journal that requires a minimum number of words per article.
Seems like these journals don't have a word count minimum: https://paperpile.com/blog/shortest-papers/
-
All tools can be abused tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: copy the first answer and hope for the best.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described; bad programmers copy and run without reading. It's just like the SO kids.
-
Literally everyone learns from unreliable teachers, the question is just how reliable.
-
They in fact often have word and page limits, and most journal articles I've been a part of have had a period at the end of cutting and trimming to fit into those limits.
-
That makes sense considering a journal can only be so many pages long.
-
I once asked ChatGPT who I was, and it hallucinated this weird thing about me being a motivational speaker for businesses. I have a very unusual name and there is only one other person in the U.S. (now the only person in the U.S., since I just emigrated) with my name. Neither of us is a motivational speaker or ever was.
Then I asked it again and it said it had no idea who I was. Which is kind of insulting to my namesake since he won an Emmy award.
-
That snark doesn't help anyone.
Imagine the AI were 100% perfect and gave the correct answer every time: people using it would still have a significantly reduced diversity of results, because they would all be using the same tool to get the same correct answer.
That people using an AI get a smaller diversity of results is neither good nor bad; it's just the way things are, the same way people sharing one pack of pens use a smaller variety of colours than people using whatever pens they happen to have.
-
Training those AIs was expensive. It swallowed very large sums of VC cash, and they intend to make it back.
Remember, their money is way more important than your life.