Researchers surprised to find less-educated areas adopting AI writing tools faster
-
I'm not really sure what you mean. These tools are not perfect, and they will usually reduce the quality of output for a skilled writer, but half of the adults in the US can't read and write at a sixth-grade level, and LLMs are greatly improving their ability to solidify and convey their thoughts in a more understandable way.
-
The reason it feels like that is because it's addressed to someone you don't know personally, even if you know them professionally. You never really know whether a specific reference would offend them, if their dog just died, how "this email finds" them, etc.
And in the context of both of you doing your jobs, you shouldn't care. It's easier to get day-to-day stuff done with niceties, even if they're hollow.
That's just the tone, though. People trying to insist they give a shit when everyone knows they don't is what bothers me. If you're firing someone, don't sugarcoat it.
-
You seriously need to look up gatekeeping because that's not what it means at all.
Also, you are making stuff up. No one has ever been against learning Latin; it has always been seen as something a sophisticated gentleman knows, literally the opposite of whatever random nonsense you're claiming right now.
-
Most people don't need to think
No, they just don't do it. The world would be in a much better position if people engaged their brains occasionally.
-
Because they're not actually using the AI that way, to support them in their writing endeavors, they're just having the AI do the writing task for them.
-
I don't really think it's fair to expect the barely literate to have writing endeavours. They're just trying to communicate without embarrassing themselves.
-
LLMs work by extrapolation, they can't output any better than the context you give them. They're used in completely inappropriate situations because they're dead easy and give very digestible content.
Your brain is the only thing in the universe that knows the context of what you're writing and why. At a sixth grade level, you could technically describe almost anything but it would be clunky and hard to read. But you don't need an LLM to fix that.
We've had tools for years that help with the technical details of writing (basic grammar, punctuation, and spelling). There are also already tools to help with phrasing and specifying a concept ("hey Google, define [X]" or "what's the word for when...").
This is more time consuming than an LLM, but guarantees that what you write is exactly what you intend to communicate. As a bonus, your reading comprehension gets better. You might remember that definition of [X] when you read it.
If you have access to those tools but can't or won't use them, then you'll never be able to write effectively. There's no magic substitute for literacy.
-
An AI can produce content that is higher quality than the prompt it is given, particularly for formulaic tasks. I do agree that it would be nice if everyone were more educated, but a large portion of the population will never get there. If simply denying them AI were going to result in a blossoming of self-education, it would have happened by now.
-
It can't ever accurately convey any more information than you give it, it just guesses details to fill in. If you're doing something formulaic, then it guesses fairly accurately. But if you tell it "write a book report on Romeo and Juliet", it can only fill in generic details on what people generally say about the play; it sounds genuine but can't extract your thoughts.
Not to get too deep into the politics of it but there's no reason most people couldn't get there if we invested in their core education. People just work with what they're given, it's not a personal failure if they weren't taught these skills or have access to ways to improve them.
And not everyone has to be hyper-literate, if daily life can be navigated at a 6th grade level that's perfectly fine. Getting there isn't an insurmountable task, especially if you flex those cognitive muscles more. The main issue is that current AI doesn't improve these skills, it atrophies them.
It doesn't push back, use logical reasoning, or seek context. It's specifically made to be quick and easy, the same as fast food. We'll be facing the intellectual equivalent of the diabetes epidemic if it gets widespread use.
-
It sounds like you are talking about use in education then, which is a different issue altogether.
You can and should set your AI to push back against poor reasoning and unsupported claims. They aren't very smart, but they will try.
-
I mean, it's the same use; it's all literacy. It's about how much you depend on it and don't use your own brain. It might be for a mindless email today, but in 20 years the next generation won't be able to read the news without running it through an LLM. They'll have no choice but to accept whatever it says because they never developed the skills to challenge it.
The models can never be totally fixed; the underlying technology isn't built for that. It doesn't have "knowledge" or "reasoning" at all. It approximates them by weighing your input against a model of how those words connect together and choosing a slightly random extension of them. Depending on the initial conditions, it might even give you a different answer on each run.
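For anyone curious what "choosing a slightly random extension" means in practice: the usual mechanism is temperature sampling over the model's per-token scores. A minimal sketch in Python, with a made-up three-word vocabulary and made-up scores standing in for a real model:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick the index of the next token from raw model scores.

    Scale the scores by temperature, softmax them into
    probabilities, then draw randomly -- which is why
    repeated runs can give different answers.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    draw = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if draw < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary and scores a model might assign after "the cat".
vocab = ["sat", "ran", "sang"]
logits = [2.0, 1.0, -1.0]
print(vocab[sample_next_token(logits, rng=random.Random(0))])
```

At very low temperature the scaled scores sharpen toward the single highest-scoring token; at higher temperatures the lower-probability continuations get picked more often, which is where the run-to-run variation comes from.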
-
Is that any worse than people getting their world view from a talking head on 24-hour news, five-second video clips on their phone, or a self-curated selection of rage-bait propaganda online? The mental decline of humanity is perpetual and overstated.
-
One bad thing doesn't make a different but also bad thing okay. And in my opinion it is worse: imagine if their world view could only come from five-second videos. Throw those history books away.
And I don't know that it's overstated, and it's not at all perpetual. Look at... everything these days. People "disagree" with fundamental facts and are blindly allowing our planet to be burnt to the ground.
It takes concentrated effort to build and maintain an educated populace. The wide availability of books and increased literacy fueled the Renaissance and the Enlightenment, pulling down the status quo and giving us access to modern medicine and literally every right and luxury you enjoy today.
-
Those people who don’t want to think need to be doing manual labor that doesn’t require thought.
-
There is published research that using AI makes people worse at critical thinking. It’s not gatekeeping, it’s a legitimate concern.
-
If they can’t think or write on their own then what is their value? Why not just go straight to the LLM and cut out the middle man?
-
I mean, books did make us worse at memorizing. I think it's give and take. There are some things that are good to cognitively offload to an AI.
-
I do agree that there are tasks that are good to offload to AI. I don't believe that reading and writing should be among them. AI can be a great tool. Ironically, since you mentioned memorization, I can't possibly retain 100% of the information I've learned in my career, so using LLMs to point me to the correct documentation or to create some boilerplate has greatly improved my productivity.
I've used AI as a conversational tool to assist in finding legitimate information to answer search queries (not just accepting its output at face value) and in generating boilerplate code (not just using it as another Stack Overflow and copying and pasting the code it gives you without understanding it). The challenge is that if we try to replace 100% of the task of communication or research or coding, we eventually lose those skills. And I worry for juniors who are just building those skills but have totally relied on AI to do the work that's supposed to teach them.
-
You mean people who haven't been taught to write quickly, easily, and with their own style tend to automate their writing sooner than those who write better than AI, can do so quickly, and have the proficiency to see writing as a form of self-expression?
Shocked, I tell you. Also not surprised that AI researchers are surprised.