Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
The ability of AI to write boilerplate-heavy things like Kubernetes manifests is astounding. It gets me 90-95% of the way there and cuts my development time roughly in half. I still have to understand the result before deployment, because I'm not going to blindly deploy something AI wrote and it rarely works without modifications, but the time savings are significant.
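To be concrete, here's a minimal sketch of the kind of boilerplate-heavy manifest I mean (the app name, image, and resource numbers are all placeholders, not anything from a real deployment):

```yaml
# Minimal Deployment sketch; every name and number here is a placeholder
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
  labels:
    app: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

Almost none of that is interesting to write by hand, which is why an LLM getting 90% of it right saves so much time, and also why the remaining 10% (selectors, image tags, resource limits) still needs a human read before `kubectl apply`.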
-
Do you want the entire article in the headline or something? Go read the article and the journal article that it cites. They expand upon all of those terms.
Also, I'm genuinely curious: what do you mean when you say that there is "no such thing as 'AI'"?
-
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
-
I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers
-
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t think critically about the data they get from it, who I assume are the same type who forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
-
As you mentioned though, that’s not really specific to LLMs at all
-
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a high technical barrier to entry, it wouldn’t be as influential with the general public as it is.
-
Linux study finds that relying on MS kills critical thinking skills.
-
Microsoft said it so I guess it must be true then
-
It's such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned over time how to search for information properly.
Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.
Fundamentally though, I do not believe in arbitrarily deciding who can and cannot have access to information.
-
Just try using AI for a complicated mechanical repair, for instance draining the radiator fluid in your specific model of car. Chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you got the edges square (so to speak).
-
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I'm aware it can hallucinate, so if I have the slightest doubt I usually search the web too (I use it a lot for basic Linux stuff and Docker).
Would some people not give a fuck about what it says and just copy & paste blindly? Sure, but that happened in my teenage days too, when all the info was spread across many blogs and wikis...
As usual, it is not the AI tool that could fuck up our critical thinking but we ourselves.
-
The definition of critical thinking is not relying on only one source. Next: rain will make you wet. Stay tuned.
-
I mean, leave it to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communications satellite and wrote franchises that are still being lined up for major Hollywood releases half a century later.
-
Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70B DeepSeek, and it definitely doesn't understand how dice work, nor the ranges I set out in my ruleset. For example, I use a 1d100 roll to determine character class, with the classes falling into certain parts of the distribution. I did it this way since some classes are intended to be rarer than others.
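For reference, something like this is what I mean; a minimal sketch in Python, with hypothetical class names and ranges, just to show the skewed 1d100 distribution the model keeps getting wrong:

```python
import random

# Hypothetical 1d100 class table: wider ranges mean more common classes.
# The class names and cutoffs are made up for illustration.
CLASS_TABLE = [
    (range(1, 41), "Fighter"),    # 1-40: 40% chance
    (range(41, 71), "Rogue"),     # 41-70: 30% chance
    (range(71, 91), "Mage"),      # 71-90: 20% chance
    (range(91, 101), "Paladin"),  # 91-100: 10% chance, the rare class
]

def roll_class() -> str:
    """Roll 1d100 and map the result onto the class table."""
    roll = random.randint(1, 100)
    for roll_range, character_class in CLASS_TABLE:
        if roll in roll_range:
            return character_class
    raise ValueError(f"roll {roll} is not covered by the table")

print(roll_class())
```

The mapping is trivial in code; the question is whether the model can follow the same ranges in conversation.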
-
Well, there are people who followed Apple Maps into lakes and other things, so the precedent is there already (I have no doubt it also existed before that)
You would need to heavily regulate it, and that's not happening anytime soon, if ever
-
Because he has the knowledge and experience to completely understand the final product. It used an approach he hadn't thought of that is better suited to the problem.
-
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely
-
Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn't seem to have measured people's critical thinking skills. Apparently it was about how the subjects felt about their critical thinking skills. People who feel like they're good at a job might not feel as adequate when their job changes to evaluating how well AI did it. The study said they felt that they used their analytical skills less when they had confidence in the AI. This also happens when you get any assistant - as your confidence in them grows you scrutinize them less. But that doesn't mean you yourself become less skillful. The title saying AI use "kills" analytical skill is very clickbaity IMO.
-
I love how they coined the term "hallucinate" instead of just saying it fails or screws up.