Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
This post did not contain any content.
Misleading headline: No such thing as "AI". No such thing as people "relying" on it. No objective definition of "critical thinking skills". Just a bunch of meaningless buzzwords.
-
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
I'm a senior software dev who uses AI to help me with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to go hunt down any references to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve the issue, but now I can ask AI and it generally gives me the answer I'm looking for.
If I'd had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental, but now I just need to know how to get X done specifically with Y this one time and probably never again.
-
Misleading headline: No such thing as "AI". No such thing as people "relying" on it. No objective definition of "critical thinking skills". Just a bunch of meaningless buzzwords.
Why do you think AI doesn't exist? Or that there's "no such thing as people 'relying' on it"? "AI" is commonly used to refer to LLMs right now. Within the context of a Gizmodo article summarizing a study on the subject, "AI" does exist. A lack of precision doesn't mean it's not descriptive of a real thing.
Also, I don't personally know anyone who "relies" on generative AI, but I don't see why it couldn't happen.
-
Yes, it's an addiction; we've got to stop all these poor souls being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.
Just look at what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.
And they just gobble that up, those poor, uncritical and irresponsible farm hands and water carriers! We can't have that!
Example
Open-Minded Closed-Mindedness: Jordan B. Peterson's Humility Behind the Moat, A Cautionary Tale
Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism, a "moat" guarding an impenetrable ideological fortress.
Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.
The Illusion of Open-Mindedness: The Moat and the Fortress
In medieval castles, a moat was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points, but rarely, if ever, allows his core positions to be meaningfully challenged.
His approach can be broken down into two key areas:
The Moat (The Appearance of Openness)
- Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).
- Acknowledges complexity and the difficulty of absolute truth.
- Concedes minor details, appearing intellectually humble.
- Uses Socratic questioning to entertain alternative viewpoints.
The Fortress (The Core That Remains Unmoved)
- Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.
- Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.
- Uses complexity as a way to avoid definitive refutation ("It's more complicated than that").
- Rarely revises fundamental positions, even when new evidence is presented.
While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?
Examples of Strategic Open-Mindedness
- Debating Sam Harris on Truth and Religion
In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.
However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.
- The Slavoj Žižek Debate on Marxism
Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.
Rather than adapting his critique in the face of Žižek's counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek's challenges. This was a classic example of engaging from the moat: appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.
- Gender, Biology, and Selective Science
Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.
For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.
The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness
Peterson’s method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.
The Risks of "Humility Behind the Mote"Creates the Illusion of Growth Without Real Change By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs. Reinforces Ideological Silos Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives. Undermines Genuine Inquiry If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative. Encourages Polarization By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.
Conclusion: The Responsibility of Public Intellectuals
Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.
For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.
So like I said: pure, evil AI slop is evil and addictive and must be banned. Lock up illegal GPU abusers, keep a registry of GPU owners, and keep track of those who would use them to abuse the shining lights of our society and try to snuff them out like a bad level of Luigi's Mansion.
But Peterson is a fuckhead... So it's accurate in this case. Afaik he does do the things it says.
-
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
I feel you, but I've asked it "why" questions too.
-
The one thing that I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for the sources is critical for double-checking.
I've found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn't give me any working suggestions or correct information. Stuff I've asked about Docker was much better.
-
This was one of the posts of all time.
New copypasta just dropped
-
People generally don't learn from an unreliable teacher.
I'd rather learn from slightly unreliable teachers than teachers who belittle me for asking questions.
-
But Peterson is a fuckhead... So it's accurate in this case. Afaik he does do the things it says.
That's the addiction talking. Use common sense! AI bad
-
This post did not contain any content.
Well no shit Sherlock.
-
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
how does he know that the solution is elegant and appropriate?
-
Idk man. I just used it the other day for recalling some regex syntax, and it was a bit helpful. If you ask it to generate a regex from a prompt, though, it won't do that successfully. However, it can break down a regex and explain it to you.
Ofc you can all say "just read the damn manual", and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.
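For what it's worth, the kind of breakdown being described maps naturally onto Python's re.VERBOSE mode, where each piece of a pattern carries its own comment. A minimal sketch, using an invented ISO-date pattern as a stand-in for whatever you'd actually ask about:

```python
import re

# A toy pattern of the sort you might ask an LLM to explain:
# it matches an ISO-style date like "2024-03-01" and names each part.
pattern = re.compile(
    r"""
    (?P<year>\d{4})    # four digits: the year
    -                  # literal hyphen separator
    (?P<month>\d{2})   # two digits: the month
    -                  # literal hyphen separator
    (?P<day>\d{2})     # two digits: the day
    """,
    re.VERBOSE,
)

match = pattern.search("released on 2024-03-01, updated later")
if match:
    print(match.groupdict())  # {'year': '2024', 'month': '03', 'day': '01'}
```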
What's regex got to do with critical thinking?
-
I'm a senior software dev who uses AI to help me with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to go hunt down any references to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve the issue, but now I can ask AI and it generally gives me the answer I'm looking for.
If I'd had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental, but now I just need to know how to get X done specifically with Y this one time and probably never again.
100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Golang, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works, because I'm never going to remember how to properly use a negative look-behind in regex (see the sketch after this comment) when I need it again in five years.
I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn't, then going to read additional information on the topic.
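Since the negative look-behind came up, here's a minimal sketch of one in Python's re module, with an invented price-filtering example standing in for the "weird ugly regex". (?<!...) asserts something about the text just before the match position without consuming any input, which is exactly the detail that's easy to forget five years later:

```python
import re

# (?<!\$) is a negative look-behind: the match succeeds only if the
# character immediately before the match position is NOT a dollar sign.
# It checks context without consuming input, so the digits still match.
pattern = re.compile(r"(?<!\$)\b\d+\b")

# Pulls out bare quantities while skipping the prices.
print(pattern.findall("3 widgets at $10 each, 2 gadgets at $7"))  # ['3', '2']
```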
-
I've found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn't give me any working suggestions or correct information. Stuff I've asked about Docker was much better.
The ability of AI to write boilerplate-heavy things like Kubernetes manifests is astounding. It gets me 90-95% of the way there and cuts my development time roughly in half. I still have to understand the result before deployment, because I'm not going to blindly deploy something that AI wrote, and it rarely works without modifications, but the savings are real.
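To make the boilerplate point concrete: even a minimal Deployment is a deep nest of apiVersion/kind/metadata/spec/selector/template blocks, which is exactly what an LLM can draft quickly and a human still has to review before deploying. A sketch with made-up placeholders for the app name, image, and port, written in Python since kubectl accepts JSON manifests as well as YAML:

```python
import json

# A minimal Kubernetes Deployment, expressed as a Python dict.
# Every name, image, and port here is a hypothetical placeholder.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app", "labels": {"app": "example-app"}},
    "spec": {
        "replicas": 2,
        # The selector must match the pod template's labels; this is the
        # kind of easy-to-miss detail worth checking in generated manifests.
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [{
                    "name": "example-app",
                    "image": "example/image:1.0",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

# Print as JSON; `kubectl apply -f` accepts JSON as well as YAML.
print(json.dumps(deployment, indent=2))
```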
-
Misleading headline: No such thing as "AI". No such thing as people "relying" on it. No objective definition of "critical thinking skills". Just a bunch of meaningless buzzwords.
Do you want the entire article in the headline or something? Go read the article and the journal article that it cites. They expand upon all of those terms.
Also, I'm genuinely curious: what do you mean when you say that there is "no such thing as 'AI'"?
-
This post did not contain any content.
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
-
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers.
-
I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
-
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
As you mentioned tho, not really specific to LLMs at all
-
As you mentioned tho, not really specific to LLMs at all
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a higher technical barrier to entry, it wouldn't be as influential with the general public as it is.