Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
When was the last time you did math without a calculator?
Calculators also don’t think critically.
-
Calculators also don’t think critically.
Damn. I wonder where all the calculus identities and mathematical puzzle solving abilities in my head disappeared to then. Surely not into the void that is Wolfram Mathematica. Surely not...
-
Really? I just asked ChatGPT and this is what it had to say:
This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.
-
Based on what? Did they get 1000 people 10 years ago, test their critical thinking skills, then retest after heavy AI use? How many people were in the control group?
-
Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.
“Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”
-
Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.
“Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”
That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome.
Dunning, meet Kruger
-
Corporations and politicians: oh great news everyone... It worked. Time to kick off phase 2...
- Replace all the water trump wasted in California with brawndo
- Sell mortgages for eggs, but call them patriot pods
- Welcome to Costco, I love you
- All medicine replaced with raw milk enemas
- Handjobs at Starbucks
- Ow my balls, Tuesdays this fall on CBS
- Chocolate rations have gone up from 10 to 6
- All government vehicles are cybertrucks
- trump nft cartoons on all USD, incest legal, Ivanka new first lady.
- Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
- FDA doesn't inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
- Handjobs at Starbucks
Well that's just solid policy right there, cum on.
-
You mean an AI that literally generated text based on applying a mathematical function to input text doesn't do reasoning for me? (/s)
I'm pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.
It's funny because I never get what I want out of AI. I've been thinking this whole time "am I just too dumb to ask the AI to do what I need?" Now I'm beginning to think "am I not dumb enough to find AI tools useful?"
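For anyone curious what "applying a mathematical function to input text" looks like stripped down to a toy, here's a rough sketch (a character-level bigram model made up purely for illustration; real LLMs are enormously bigger, but the loop is the same: map previous tokens to a probability distribution over the next token, sample, repeat):

```python
from collections import Counter, defaultdict
import random

# Toy character-level bigram "model" (illustration only, nothing like a real LLM):
# count how often each character follows each other character, then generate
# text by repeatedly sampling from that distribution. No reasoning anywhere.
corpus = "the cat sat on the mat and the cat ran after the rat"

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_char(prev):
    options = follows[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

text = "t"
for _ in range(40):
    text += next_char(text[-1])
print(text)  # plausible-looking character soup, produced purely by lookup + sampling
```

Swap the bigram table for a few hundred billion parameters and you get something far more fluent, but it's still the same kind of function application, not it doing the reasoning for you.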
-
Remember the saying:
Personal computers were “bicycles for the mind.”
I guess with AI and social media it's more like melting your mind or something. I can't find a better analogy. Like, "a baseball bat to your leg for the mind" doesn't roll off the tongue.
I know Primeagen has turned off Copilot because he said the "copilot pause" is daunting and affects how he codes.
Cars for the mind.
Cars are killing people.
-
Critical thinking skills are what hold me back from relying on AI.
-
Really? I just asked ChatGPT and this is what it had to say:
This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.
Not sure if sarcasm...
-
Memorization is not the same thing as critical thinking.
A library of internalized axioms is necessary for efficient critical thinking. You can't just turn yourself into a Chinese Room of analysis.
A well designed test will freely give you an equation sheet or even allow a cheat sheet.
Certain questions are phrased to force the reader to pluck out and categorize bits of information, to implement complex iterations of simple formulae, and to perform long-form calculations accurately without regard to the formulae themselves.
But for elementary skills, you're often challenging the individual to retain basic facts and figures. Internalizing your multiplication tables can serve as a heuristic that's quicker than doing simple sums in your head. Knowing the basic physics formulae - your F = ma, ρ = m/V, f = v/λ, etc. - can give you a broader understanding of the physical world.
If all you know how to do is search for answers to basic questions, you're slowing down your ability to process new information and recognize patterns or predictive signals in a timely manner.
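To be concrete about how cheap those internalized formulae are once you have them (the numbers below are made up purely for illustration):

```python
# Back-of-the-envelope checks with made-up numbers, just to show the formulae in use.
m = 2.0          # mass in kg
a = 9.8          # acceleration in m/s^2
F = m * a        # F = ma          -> 19.6 N

V = 0.001        # volume in m^3 (one litre)
rho = m / V      # rho = m / V     -> 2000 kg/m^3

v = 343.0        # speed of sound in air, roughly, in m/s
lam = 1.0        # wavelength in m
f = v / lam      # f = v / lambda  -> 343 Hz

print(F, rho, f)
```

The point is you can sanity-check the world in your head faster than you can type the question into a search box.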
I agree with all of this. My comment is meant to refute the implication that not needing to memorize phone numbers is somehow analogous to critical thinking. And yes, internalized axioms are necessary, but largely the core element is memorizing how these axioms are used, not necessarily their rote text.
-
Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI says.
Seriously, ask AI about anything you are actually an expert in. It's laughable sometimes... However, you need to know the subject to know it's wrong. Do not trust it implicitly about anything.
-
When it was new to me I tried ChatGPT out of curiosity, like with any new tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.
I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.
-
When it was new to me I tried ChatGPT out of curiosity, like with any new tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.
I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.
How else can the "elite" separate themselves from the common folk? The elite love writing 90% fluff and requiring high word counts in academia instead of actually making concise, clear, articulate articles that are easy to understand. You have to hit a certain word count to qualify as "good writing" in any elite group. Look at Law, Political Science, History, Scientific Journals, etc. I had professors who would tell me they could easily find the information they needed in the articles, and that one day we would be able to as well. That's why ChatGPT spits out a shit ton of fluff.
-
Never used it in any practical function. I tested it to see if it was realistic and I found it extremely wanting. As in, it sounded nothing like the prompts I gave it.
-
The same could be said about people who search for answers anywhere on the internet, or even the world, and don’t have some level of skepticism about their sources of information.
It’s more like, not having critical thinking skills perpetuates a lack of critical thinking skills.
Yeah, if you repeated this test with the person having access to Stack Exchange or not, you'd see the same results. Not much difference between someone mindlessly copying an answer from Stack Overflow vs copying it from AI. Both lead to more homogeneous answers and lower critical thinking skills.
-
The only beneficial use I've had for "AI" (LLMs) has just been rewriting text, whether that be to re-explain a topic based on a source or, for instance, to sort and shorten/condense a list.
Everything other than that has been completely incorrect, unreadably long, context-lacking slop.
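For what it's worth, that rewrite/condense workflow is only a few lines if you script it. Here's a rough sketch assuming the openai Python SDK (v1+); the model name and the example input are placeholders you'd swap for your own:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messy_notes = """- fix login redirect bug
- fixed the login redirect issue (duplicate)
- bump node to 20, update deps
- updated dependencies"""

# Ask the model to do the one thing it's been reliable for here: condensing text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you actually have access to
    messages=[
        {"role": "user",
         "content": "Deduplicate and condense this list into terse bullet points:\n"
                    + messy_notes},
    ],
)
print(response.choices[0].message.content)
```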
-
Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago?
Microsoft can go fuck itself right in the gates.
-
- Handjobs at Starbucks
Well that's just solid policy right there, cum on.
It would wake me up more than coffee, that's for sure.