Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
Garbage in, garbage out. Ingesting all that internet blather didn't make the AI much smarter, if at all.
-
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, research how to prompt them to get what I want, read up on programming and software design principles, etc.
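To give a concrete flavor of what that debugging looks like, here's a minimal, hypothetical Python sketch of the sort of bug I end up catching in generated code (my own made-up example, not output from any particular model):

```python
# Plausible-looking generated function: it mutates the list while
# iterating over it, so it silently skips elements after each removal.
def remove_negatives(values):
    for v in values:
        if v < 0:
            values.remove(v)  # bug: the element after each removal gets skipped
    return values

# What reviewing and debugging it teaches you: build a new list instead.
def remove_negatives_fixed(values):
    return [v for v in values if v >= 0]

print(remove_negatives([1, -2, -3, 4]))        # [1, -3, 4]  (wrong)
print(remove_negatives_fixed([1, -2, -3, 4]))  # [1, 4]
```

Spotting that kind of thing is exactly the critical-thinking exercise people claim LLMs take away from you.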
-
I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills, too.
-
I agree with the findings for legitimate reasons, but it's not a black-and-white, right-or-wrong issue. I think AI is wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be gained from what AI in general can do for us, on a collective and individual basis.
Today I had it analyze 8 medical documents: provide an analysis, cross-reference its output with scientific studies (including sources), and handle other lengthy queries. These documents deal at great length, on a per-document basis, with bacterial colonies and multiple GI and other bodily systems. Some of the most advanced testing science offers.
It was able not only to provide me with accurate numbers, which I fact-checked side by side against my documents, but also to explain methods of countering multifaceted systemic issues, and those matched what multiple specialty doctors had said. Which is fairly impressive, given that seeing a doctor takes 3 to 9 months or longer, and the one you get may or may not give a shit, is overworked and understaffed, pick your reasoning.
To really test it out, I had it scan the documents from multiple fresh, blank chat tabs and even from different computers.
Overall, some of the numbers were off: say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it they were incorrect, and had it reassess, giving it more time and insisting on accuracy. I also supplied a bit more context about how to read the tables, and I mean broad context, such as "page 6 shows gene expression; use this as a reference to find all underlying issues," since it isn't a mind reader. It managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. On the antibiotic resistance gene analysis, it found multiple therapeutic approaches for fighting antibiotic-resistant bacteria in a fraction of the time it would take a human to research them.
I would not bet my life solely on its responses, as it's far from perfect, and as with any information, it should be cross-referenced and fact-checked through various sources. But while there are valid points, I find those who speak so ill of its usage to be largely unfounded. My 2 cents.
-
Copying isn't the same as using your brain to form logical conclusions. Instead you're taking someone else's wild interpretation, research, or study and blindly copying it as fact. That lowers critical thinking because you're not thinking at all. Bad information is always bad, no matter how far it spreads. Incomplete info is no different.
-
Like any tool, it's only as good as the person wielding it.
-
I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.
Legit, being able to say "I want these questions. But... not these..." and get them back at a moment's notice really does let me say "FUCK it. Pop quiz. Let's go, class," and be ready with brand-new questions on the board that I didn't have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer's block with a "yeah, and!" machine.
-
I've spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.
AI/LLMs are great for bouncing ideas off of and for tweaking things. I gave it a prompt describing what I was looking for (the guardian of dusk steps out and says: "The dawn brings the warmth of the sun and awakens the world. So does your trial begin." He is a druid, and the party is five level 1 players. Give me a stat block and XP amount for this situation).
I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 levels as the player does specific things to gain leveling points for just that item).
I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did, too.
-
Idk man. I just used it the other day to recall some regex syntax, and it was a bit helpful. If you ask it to generate a regex from a prompt, though, it won't do that successfully. It can, however, break down a regex and explain it to you.
Ofc you all can say "just read the damn manual," and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.
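For example, here's the kind of breakdown it gives, written up as a hypothetical Python sketch (the email pattern is my own illustration, not the exact regex I asked about); Python's re.VERBOSE flag lets the explanation sit next to each piece:

```python
import re

# A made-up email-matching pattern of the kind you might paste in and
# ask the AI to explain; re.VERBOSE allows these inline comments.
pattern = re.compile(r"""
    ^               # start of string
    [\w.+-]+        # local part: letters, digits, dots, plus, hyphen
    @               # a literal @
    [\w-]+          # domain name
    (\.[\w-]+)+     # one or more dot-separated labels (e.g. .co.uk)
    $               # end of string
""", re.VERBOSE)

print(bool(pattern.match("user.name+tag@example.co.uk")))  # True
print(bool(pattern.match("not-an-email")))                 # False
```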
-
Hey, just letting you know: getting the answers you want after getting a whole lot of answers you don't want is pretty much how everyone learns.
-
“Deepsink” lmao sounds like some sink cleaner brand
-
yes, exactly. You lose your critical thinking skills
-
Totally agree with you! I'm in a different field, but I see it in the same light. Let it get you 80-90% of the way through whatever the task is, and then refine from there. It frees up the time that 90% would've taken, which you can spend adding all the extra cool shit. So many people assume you have to take it at 100% face value. Just take what it gives you as a jumping-off point.
-
I'd agree that anybody who just takes the first answer offered to them, by any means, as fact would show the same results as this study.
-
Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance>
-
Unlike those others, Microsoft could do something about this considering they are literally part of the problem.
And yet I doubt Copilot will be going anywhere.
-
AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.
Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.
-
People generally don't learn from an unreliable teacher.
-
Please show me the peer-reviewed scientific journal that requires a minimum number of words per article.
Seems like these journals don't have a word count minimum: https://paperpile.com/blog/shortest-papers/