Researchers surprised to find less-educated areas adopting AI writing tools faster
-
AI doesn't really "summarize", though; it just chooses random topics to filter out.
-
5 Absätze
"Paragraphs" is the English word you were probably looking for
-
Is that why all the executives and directors at my giant tech company are pushing AI? Fuckwits...
-
Of course I could, but I don’t want to
🤪
-
Hey, this thing you're doing is called gatekeeping.
We get multiple versions of it every time a new technology comes along.
People defending typewriters. Or insisting on Latin. Or resisting anything better than a quill and a jar of ink. Or grumbling when paper became affordable.
Just. Stop.
-
This isn't quite the same thing. If we were talking about a tool like augmented audio-to-text, I'd agree. I'd probably even agree if it were an AI-proofreader-style model where you feed it what you have to make sure it's generally comprehensible.
Writing as a skill is about solidifying and conveying thoughts so they can be understood. The fact that it turns into text is kind of irrelevant. Hand-waving that process away is just rubber-stamping something you kinda-sorta started the process of maybe thinking about.
-
I'm not really sure what you mean. LLMs aren't perfect, and in fact they'll usually reduce the quality of a skilled writer's output. But half of the adults in the US can't read and write at a sixth-grade level, and LLMs are greatly improving their ability to solidify and convey their thoughts in a more understandable way.
-
It feels like that because it's addressed to someone you don't know personally, even if you know them professionally. You never really know if a specific reference would offend them, if their dog just died, how "this email finds" them, etc...
And in the context of both of you doing your jobs, you shouldn't care. It's easier to get day-to-day stuff done with niceties, even if they're hollow.
That's just the tone, though. What bothers me is people insisting they give a shit when everyone knows they don't. If you're firing someone, don't sugarcoat it.
-
You seriously need to look up gatekeeping because that's not what it means at all.
Also, you're making stuff up. No one has ever been against learning Latin; it has always been seen as something a sophisticated gentleman knows, which is literally the opposite of whatever random nonsense you're claiming right now.
-
Most people don't need to think
No, they just don't do it. The world would be in a much better position if people engaged their brains occasionally.
-
Because they're not actually using the AI that way, as support for their own writing; they're just having the AI do the writing task for them.
-
I don't really think it's fair to expect the barely literate to have writing endeavours. They are just trying to communicate without embarrassing themselves.
-
LLMs work by extrapolation; they can't output anything better than the context you give them. They're used in completely inappropriate situations because they're dead easy and give very digestible content.
Your brain is the only thing in the universe that knows the context of what you're writing and why. At a sixth-grade level you could technically describe almost anything, but it would be clunky and hard to read. But you don't need an LLM to fix that.
We've had tools for years that help with the technical details of writing (basic grammar, punctuation, and spelling). There are also already tools to help with phrasing and specifying a concept ("hey Google, define [X]" or "what's the word for when...").
This is more time-consuming than an LLM, but it guarantees that what you write is exactly what you intend to communicate. As a bonus, your reading comprehension gets better. You might remember that definition of [X] when you read it.
If you have access to those tools but can't or won't use them, then you'll never be able to write effectively. There's no magic substitute for literacy.
-
An AI can produce content that is higher quality than the prompt it's given, particularly for formulaic tasks. I do agree that it would be nice if everyone were more educated, but a large portion of the population will never get there. If simply denying them AI were going to result in a blossoming of self-education, it would have happened by now.
-
It can't ever accurately convey more information than you give it; it just guesses details to fill in the gaps. If you're doing something formulaic, it guesses fairly accurately. But if you tell it "write a book report on Romeo and Juliet", it can only fill in generic details about what people generally say about the play; it sounds genuine, but it can't extract your thoughts.
Not to get too deep into the politics of it, but there's no reason most people couldn't get there if we invested in their core education. People just work with what they're given; it's not a personal failure if they weren't taught these skills or don't have access to ways to improve them.
And not everyone has to be hyper-literate; if daily life can be navigated at a sixth-grade level, that's perfectly fine. Getting there isn't an insurmountable task, especially if you flex those cognitive muscles more. The main issue is that current AI doesn't improve these skills, it atrophies them.
It doesn't push back, use logical reasoning, or seek context. It's specifically made to be quick and easy, the same as fast food. We'll have the intellectual equivalent of the diabetes epidemic if it gets widespread use.
-
It sounds like you are talking about use in education then, which is a different issue altogether.
You can and should set your AI to push back against poor reasoning and unsupported claims. They aren't very smart, but they will try.
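For what it's worth, the nudge can be as simple as a system prompt. Here's a rough sketch, assuming the OpenAI Python client; the model name and the exact wording of the instruction are just placeholders, not a recommendation:

    # Rough sketch: tell the model up front to challenge weak arguments.
    # Assumes the OpenAI Python client; model name and prompt wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": (
                "Challenge weak reasoning and unsupported claims in the user's draft. "
                "Point out missing evidence instead of smoothing it over."
            )},
            {"role": "user", "content": "Here's my draft: ..."},
        ],
    )
    print(response.choices[0].message.content)

It won't catch everything, but it's a very different default than "make this sound nicer."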
-
I mean, it's the same use; it's all literacy. It's about how much you depend on it instead of using your own brain. It might be a mindless email today, but in 20 years the next generation won't be able to read the news without running it through an LLM. They'll have no choice but to accept whatever it says, because they never developed the skills to challenge it.
The models can never be totally fixed; the underlying technology isn't built for that. It doesn't have "knowledge" or "reasoning" at all. It approximates them by weighing your input against a model of how words connect together and choosing a slightly random extension of them. Depending on the initial conditions, it might even give you a different answer on each run.
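To make the "slightly random extension" part concrete, here's a minimal Python sketch with made-up toy numbers (not any real model's scores): the next token is a weighted random draw from a probability distribution, which is why the same prompt can come back different each time.

    # Toy illustration of temperature sampling; the scores below are invented.
    import math
    import random

    def sample_next_token(scores, temperature=0.8):
        # Scale the scores, then turn them into probabilities (softmax).
        scaled = [s / temperature for s in scores.values()]
        peak = max(scaled)
        exps = [math.exp(s - peak) for s in scaled]
        probs = [e / sum(exps) for e in exps]
        # Draw one token at random, weighted by those probabilities.
        return random.choices(list(scores.keys()), weights=probs, k=1)[0]

    # Hypothetical scores a model might assign after "The play is mostly about ..."
    scores = {"love": 2.1, "fate": 1.9, "Verona": 1.2, "biology": -3.0}
    print([sample_next_token(scores) for _ in range(5)])  # varies run to run

Lower the temperature and the draw gets more deterministic; raise it and you get more variety. Either way the mechanism is pattern continuation, not reasoning.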
-
Is that any worse than people getting their worldview from a talking head on 24-hour news, five-second video clips on their phone, or a self-curated selection of rage-bait propaganda online? The mental decline of humanity is perpetual and overstated.
-
One bad thing doesn't make a different but also bad thing okay. And in my opinion it is worse: imagine if their worldview could only come from five-second videos. Throw those history books away.
And I don't know that it's overstated, and it's not at all perpetual. Look at... everything these days. People "disagree" with fundamental facts and are blindly allowing our planet to be burnt to the ground.
It takes concentrated effort to build and maintain an educated populace. The wide availability of books and increased literacy directly caused the Renaissance, pulling down the status quo and giving us access to modern medicine and literally every right and luxury you enjoy today.