Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills
-
You mean an AI that literally generates text by applying a mathematical function to input text doesn't do reasoning for me? (/s)
I'm pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.
It's funny because I never get what I want out of AI. I've been thinking this whole time "am I just too dumb to ask the AI to do what I need?" Now I'm beginning to think "am I not dumb enough to find AI tools useful?"
-
Cars for the mind.
Cars are killing people.
-
Critical thinking skills are what hold me back from relying on AI
-
Not sure if sarcasm..
-
I agree with all of this. My comment is meant to refute the implication that not needing to memorize phone numbers is somehow analogous to critical thinking. And yes, internalized axioms are necessary, but largely the core element is memorizing how these axioms are used, not necessarily their rote text.
-
Seriously, ask AI about anything you're actually an expert in. It's laughable sometimes... However, you have to know the subject to know it's wrong. Do not trust it implicitly about anything.
-
When it was new to me I tried ChatGPT out of curiosity, like with any new tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of input. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard child of a bad encyclopedia and the annoying kid in school.
I realized I was approaching it wrong: it was meant to be understood not as a useful tool, but as something close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind that people say they use it when writing.
-
How else can the "elite" separate themselves from the common folk? The elite love writing 90% fluff and require high word counts in academia instead of actually producing concise, clear, articulate articles that are easy to understand. You have to hit a certain word count to qualify as "good writing" in any elite group. Look at law, political science, history, scientific journals, etc. I had a professor who would tell me they could easily find the information they needed in the articles, and that one day we would be able to as well. That's why ChatGPT spits out a shit ton of fluff.
-
Never used it in any practical function. I tested it to see if it was realistic and I found it extremely wanting. As in, it sounded nothing like the prompts I gave it.
-
Yeah, if you repeated this test with people who did or didn't have access to Stack Exchange, you'd see the same results. There's not much difference between someone mindlessly copying an answer from Stack Overflow and copying it from AI. Both lead to more homogeneous answers and lower critical thinking skills.
-
The only beneficial use I've had for "AI" (LLMs) has just been rewriting text, whether that be to re-explain a topic based on a source, or, for instance, sort and shorten/condense a list.
Everything other than that has been completely incorrect, unreadably long, context-lacking slop.
-
Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago?
Microsoft can go fuck itself right in the gates.
-
It would wake me up more than coffee, that's for sure.
-
Garbage in, garbage out. Ingesting all that internet blather didn't make the AI smarter by much, if anything.
-
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
-
I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.
-
I agree with the findings for legitimate reasons, but it's not black-and-white right or wrong. I think AI is wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be gained from what AI in general can do for us, both collectively and individually.
Today I had it analyze 8 medical documents: I told it to provide analysis, cross-reference its output with scientific studies (including sources), and run other lengthy queries. These documents deal at great length with bacterial colonies and multiple GI and bodily systems on a per-document basis. Some of the most advanced testing science offers.
It was able not only to provide accurate numbers, which I fact-checked side by side against my documents, but also to explain methods for countering multi-faceted systemic issues that matched the advice of multiple specialist doctors. That's fairly impressive given that seeing a specialist takes 3 to 9 months or longer, and they may or may not give a shit, being overworked and understaffed; pick your reasoning.
To really put it through its paces, I also tried having it scan the documents from multiple fresh, blank chat tabs and even from different computers.
Overall, some of the numbers were off: say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it that it was incorrect and to reassess, giving it more time and stressing accuracy, and supplied a bit more context about how to read the tables. I mean broad context, such as "page 6 shows gene expression; use this as a reference to find all underlying issues," since it isn't a mind reader. It managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. On the antibiotic resistance gene analysis, it was able to find multiple therapeutic approaches for fighting antibiotic-resistant bacteria in a fraction of the time it would take a human to research them.
I would not bet my life solely on its responses; it's far from perfected, and as always, any info should be cross-referenced and fact-checked through various sources. But while there are valid points, I find the people who speak such ill of its usage largely unfounded. My 2 cents.
-
Copying isn't the same as using your brain to form logical conclusions. Instead, you're taking someone else's wild interpretation, research, or study and blindly copying it as fact. That lowers critical thinking because you're not thinking at all. Bad information is always bad no matter how far it spreads. Incomplete info is no different.
-
Like any tool, it's only as good as the person wielding it.
-
I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.
Legit, being able to say "I want these questions. But... not these..." and get them back at a moment's notice really does let me say "FUCK it. Pop quiz. Let's go, class." and be ready with brand-new questions on the board that I didn't have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer's block with a "yeah, and...!" machine.