Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
As you mentioned, though, it's not really specific to LLMs at all
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a higher technical barrier to entry, it wouldn't be as influential with the general public as it is.
-
Linux study finds that relying on MS kills critical thinking skills.
-
Microsoft said it so I guess it must be true then
-
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a higher technical barrier to entry, it wouldn't be as influential with the general public as it is.
It's such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned over time how to properly search for information.
Unfortunately, the vast majority of the population over the last two decades has not put in that effort, and it shows lol.
Fundamentally, though, I do not believe in arbitrarily deciding who can and cannot have access to information.
-
Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).
-
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Will some people not give a fuck about what it says and just copy & paste it blindly? Sure, but that happened too in my teenage days, when all the info was spread across many blogs and wikis...
As usual, it is not the AI tool that fucks up our critical thinking, but we ourselves.
-
The definition of critical thinking is not relying on only one source. Next up: rain will make you wet. Stay tuned.
-
It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI, but he doesn't understand the implications of what he wants to do. He sees Dave as a detriment to the mission and decides it can be better accomplished without him... not stopping to think about the implications of what he is doing.
I mean, leave it to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communications satellite and wrote franchises that are still being lined up for major Hollywood releases half a century later.
-
I've spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.
AI/LLMs are great for bouncing ideas off of and using to tweak things. I gave it a prompt describing what I was looking for (the guardian of dusk steps out and says: "The dawn brings the warmth of the sun and awakens the world. So does your trial begin." He is a druid and the party is five level 1 players. Give me a stat block and XP amount for this situation).
I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 times as the player does specific things to gain leveling points just for the item).
I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It also adjusted pretty well to what I did.
Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70b of DeepSeek, and it definitely doesn't understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 being used to determine character class, with the classes falling into certain parts of the distribution. I did it this way, since some classes are intended to be rarer than others.
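To make the kind of table I mean concrete, here's a rough Python sketch; the class names and ranges below are made up for illustration, not my actual ruleset:

```python
import random

# 1d100 class table: each range of the die maps to a class, so rarer
# classes simply get a narrower slice of the distribution.
# (Illustrative names and ranges only.)
CLASS_TABLE = [
    (range(1, 51), "Fighter"),   # 1-50: common
    (range(51, 81), "Rogue"),    # 51-80: less common
    (range(81, 96), "Cleric"),   # 81-95: uncommon
    (range(96, 101), "Wizard"),  # 96-100: rare
]

def roll_class() -> str:
    """Roll 1d100 and map the result onto the class ranges."""
    roll = random.randint(1, 100)
    for bucket, name in CLASS_TABLE:
        if roll in bucket:
            return f"{name} (rolled {roll})"
    raise ValueError(f"roll {roll} is not covered by the table")

print(roll_class())
```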
-
Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).
Well, there are people who followed Apple Maps into lakes and other things, so the precedent is already there (I have no doubt it also existed before that).
You would need to regulate it heavily, and that's not happening anytime soon, if ever.
-
How does he know that the solution is elegant and appropriate?
Because he has the knowledge and experience to completely understand the final product. It used an approach that he hadn't thought of, that is better suited to the problem.
-
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.
-
Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn't seem to have measured people's critical thinking skills. Apparently it was about how the subjects felt about their critical thinking skills. People who feel like they're good at a job might not feel as adequate when their job changes to evaluating how well AI did it. The study said they felt that they used their analytical skills less when they had confidence in the AI. This also happens when you get any assistant - as your confidence in them grows you scrutinize them less. But that doesn't mean you yourself become less skillful. The title saying AI use "kills" analytical skill is very clickbaity IMO.
-
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Will some people not give a fuck about what it says and just copy & paste it blindly? Sure, but that happened too in my teenage days, when all the info was spread across many blogs and wikis...
As usual, it is not the AI tool that fucks up our critical thinking, but we ourselves.
I love how they created the term "hallucinate" instead of saying it fails or screws up.
-
Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70b of DeepSeek, and it definitely doesn't understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 being used to determine character class, with the classes falling into certain parts of the distribution. I did it this way, since some classes are intended to be rarer than others.
I ran a campaign by myself with 2 of my characters. I had DS act as DM. It seemed to handle it all perfectly fine. I tested it later and gave it scenarios. I asked it to roll the dice and show all its work. Dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.
I then tested a few scenarios to check and see if it would follow the rules as they are supposed to be in 5e. It got all of that correct as well. It did give me options for if I wanted the rules adjusted (I asked it to roll damage for a barbarian casting Fireball; it said barbs couldn't, but gave me reasons that would allow exceptions).
What it ended up flubbing on later was forgetting the proper initiative order. I had to remind it a couple times that it messed it up. This only happened way later in the campaign. So I think I was approaching the limits of its memory window.
I tried the distilled model locally. It didn't even realize I was asking it to DM; it just repeated the outline of the campaign.
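For anyone curious what I was checking the dice math against, it's basically this, a rough Python sketch (the +5 bonus in the example is just made up):

```python
import random

def roll_d20(bonus: int = 0, advantage: bool = False, disadvantage: bool = False) -> int:
    """Roll a d20 plus an optional bonus; with advantage/disadvantage,
    roll two dice and keep the higher/lower one (5e-style)."""
    first, second = random.randint(1, 20), random.randint(1, 20)
    if advantage and not disadvantage:
        die = max(first, second)
    elif disadvantage and not advantage:
        die = min(first, second)
    else:
        die = first  # advantage and disadvantage cancel out, or neither applies
    return die + bonus

# Example: an attack roll with a +5 bonus, rolled at advantage.
print(roll_d20(bonus=5, advantage=True))
```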
-
I ran a campaign by myself with 2 of my characters. I had DS act as DM. It seemed to handle it all perfectly fine. I tested it later and gave it scenarios. I asked it to roll the dice and show all its work. Dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.
I then tested a few scenarios to check and see if it would follow the rules as they are supposed to be in 5e. It got all of that correct as well. It did give me options for if I wanted the rules adjusted (I asked it to roll damage for a barbarian casting Fireball; it said barbs couldn't, but gave me reasons that would allow exceptions).
What it ended up flubbing on later was forgetting the proper initiative order. I had to remind it a couple times that it messed it up. This only happened way later in the campaign. So I think I was approaching the limits of its memory window.
I tried the distilled model locally. It didn't even realize I was asking it to DM; it just repeated the outline of the campaign.
It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience, it is helpful.
-
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.
Gen Zs are TERRIBLE at searching things online, in my experience. I'm a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.
-
It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience, it is helpful.
I'm anxious to see it as well. I would love to see something like this implemented into games and focused solely on whatever game it's in. I imagine something like Skyrim, but with an LLM on every character, or at least the main ones. I downloaded the mod that adds it to Skyrim, but I haven't had the chance to play with it yet. It does require prompts to let the NPC know you're talking to it; I'd love to see something more natural, even NPCs carrying on their own conversations with each other and not just with the PC.
I've also been watching the Vivaladirt people. We need a fourth-wall-breaking NPC in every game once we get an LLM like that.
-
Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).
I think this is only an issue in the beginning. People will sooner or later realise that they can't blindly trust LLM output, and they'll learn how to craft prompts that verify other prompts (or, better said, that prove not enough relevant data was analysed and that the output is hallucination).
-
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.
But I agree that relying on it to think for you is not a good thing.