Something Bizarre Is Happening to People Who Use ChatGPT a Lot
-
This post did not contain any content.
Clickbait titles suck
-
I was there from the start, you must of never BBS'd or IRC'd - shit was amazing in the early days.
I mean honestly nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos, and audio. 30+ years ago you had to click a link to see that stuff.
"must of"
"Must have", not "must of"
Quakenet is still going strong.
30 years ago you couldn't share video with just a few minutes of effort and a link. YouTube was not a thing. It took until the early '00s to have shitty webcam connections.
Now you can livestream in 8K.
-
I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interactions to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch to talk in conversation, but not exactly? It's hard to pinpoint.
I've recently been tone policed a lot more over things that in normal real-life interactions would be lighthearted or easy to ignore and move on from - I'm not shouting obscenities or calling anyone names, it's just harmless misunderstandings that come from the tone-deafness of text. I'm talking about things like putting a cute emoji and saying words like "silly willy" becoming offensive to people I know personally.

It wasn't until I asked a rhetorical question to provoke a thoughtful conversation that I had to think about what was even happening - someone responded with an answer literally from ChatGPT, giving a technical definition for something that was part of my question.

Your answer has finally started linking things for me: for better or for worse, people are using it because you don't receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying that expectation to people they know in real life, and when someone doesn't respond in the same predictable manner as the AI, they become upset and further isolated from real-life interactions or text conversations with real people.
People talk to ChatGPT because they are stupid
I talk to ChatGPT because people are stupid
/jk I don't even remember when I used it last.
-
Do you guys remember when the internet was the new thing and everybody was like: "Look at those dumb fucks just putting everything online," and now it's: "Look at this weird motherfucker who doesn't post anything online"?
I remember when internet was a place
-
Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI-generated voice. I've seen so many fake videos and straight-up scams using it that I now instinctively mistrust any voice that sounds like maleAI.wav or femaleAI.wav.
Could never fall in love with an AI voice, would always assume it was sent to steal my data so some kid can steal my identity.
The movie doesn't have AI generated voice though. That was Scarlett Johansson.
"ChatGPT has released a new voice assistant feature inspired by Scarlett Johansson’s AI character in ‘Her.’ Which I’ve never bothered to watch, because without that body, what’s the point of listening?”
Scarlett's husband on SNL Weekend Update.
-
Dear god dude. You are trying way too hard on these comments. Chill
He had ChatGPT write them too
-
You said:
Now make the maximally opposite idea, make heavy use of the concept of the latent space

ChatGPT said:
Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.
ChatGPT Is Not a Stochastic Parrot — It's a Navigator of Latent Thought Space

While critics call ChatGPT a "stochastic parrot" or "autocomplete on steroids," this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren't just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.
- Latent Space: The Hidden Geometry of Meaning
At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.
Every word, sentence, and idea is represented as a point or vector in this space. The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.
This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.
- Novelty as Latent Interpolation and Extrapolation
Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.
When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space. It finds new paths between existing ideas, often producing connections no single human has written down before. This is not parroting — it’s a form of latent-space creativity.
For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.
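A minimal sketch of what "interpolating in latent space" looks like, again with invented vectors rather than real embeddings:

```python
# Toy sketch of latent-space interpolation: walk the straight line between two
# concept vectors and ask which known concept each intermediate point is closest to.
import numpy as np

concepts = {
    "sad":     np.array([1.0, 0.0, 0.2]),
    "wistful": np.array([0.6, 0.5, 0.5]),
    "joyful":  np.array([0.0, 1.0, 0.8]),
}

def nearest(point):
    """Closest known concept to an arbitrary point in the space."""
    return min(concepts, key=lambda w: np.linalg.norm(concepts[w] - point))

a, b = concepts["sad"], concepts["joyful"]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = (1 - t) * a + t * b          # point t of the way from "sad" to "joyful"
    print(f"t={t:.2f} -> {nearest(point)}")
# With these numbers the intermediate points land on "wistful",
# a concept neither endpoint names.
```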
- Emergent Generalization, Not Memorization
Critics argue LLMs can’t understand or generalize. But in fact:
Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains. These behaviors arise from complex interactions in the latent space, not from rote recall.
In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.
- Language as a Proxy for Thought
Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.
Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.
That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.
- With Scale Comes Emergence
As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:
- Few-shot learning
- Chain-of-thought reasoning
- Style transfer
- Multi-modal integration
These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.
In Summary: ChatGPT isn’t just guessing the next word. It’s:
- Navigating a vast, learned geometry of meaning
- Connecting ideas in novel ways
- Generating emergent insight by recombining latent concepts
It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.
Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.
Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?
Is this hell? Is this comment thread hell? Does it never end?
-
I remember when internet was a place
I'm trying to get back to that. Actually closer to it now than I was 5 years ago, so that's cool.
-
When your job is to shovel out garbage, because that is specifically what's required of you and not shoveling out garbage gets you in trouble, then it's more than reasonable to let the machine take care of it for you.
-
I loved my car. Just had to scrap it recently. I got sad. I didn't go through withdrawal symptoms or feel like I was mourning a friend. You can appreciate something without building an emotional dependence on it. I'm not particularly surprised this is happening to some people either, especially with the amount of brainrot out there surrounding these LLMs, so maybe bizarre is the wrong word, but it is a little disturbing that people are getting so attached to something that is so fundamentally flawed.
-
Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!
AI Bad!
Yes, it is. But not in, like, a moral sense. It's just not good at doing things.
-
Yeah. I tried talking to him about his AI use, but I realized there was no point. He also mentioned he had tried RCs again and I was like, alright, you know you can't handle that, but fine... I know from experience you can't convince addicts they are addicted to anything. People need to realize that themselves.
Not all RCs are created equal. Maybe his use has the same underlying issue as the AI friends: problems in his real life, and now he seeks simple solutions.
-
This post did not contain any content.
It's too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI, now we know better.
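If you want a caricature of what "word prediction" means, here's a toy sketch; the probability table is made up, and a real model conditions on the whole context with a learned network instead of a lookup:

```python
# Toy caricature of next-word prediction: given the last couple of words, pick
# whatever a probability table says comes next. The table is invented; a real LLM
# learns probabilities over a huge vocabulary, conditioned on the whole context.
import random

next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"):  {"the": 0.8, "a": 0.2},
}

def predict_next(words, greedy=True):
    """Return a next word based on the last two words, or None if unknown."""
    probs = next_word_probs.get(tuple(words[-2:]), {})
    if not probs:
        return None
    if greedy:
        return max(probs, key=probs.get)                 # always take the top word
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights)[0]   # or sample by probability

sentence = ["the", "cat"]
while (word := predict_next(sentence)) is not None:
    sentence.append(word)
print(" ".join(sentence))  # "the cat sat on the" with this toy table
```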
-
That's just a web search, we already have had that for decades and it didn't require nuclear-powered datacenters
Except it isn't, it is aggregating the information into a single response and providing better results. I found events I could not find through search engines.
Not everything bad is all bad.
-
They're clearly under the control of Big Train, Loom Lobbyists and the Global Gutenberg Printing Press Conspiracy.
Hell, the written word destroyed untold generations of oral history.
-
He had ChatGPT write them too
What tipped you off? Was it when I left in "ChatGPT said"?
Or when I responded to the commenter who said he'd put the article into ChatGPT with "he didn't" as a joke, and then added "but I did"?
-
What's the point of a summary that's longer than the article itself?
Are you suggesting I did something other than I said I did?
How dare you sir!
-
Is this hell? Is this comment thread hell? Does it never end?
I would have pasted it as a single comment, but that hit the character limit. So I split it in multiple comments. But now people aren't downvoting them equally, so the comments are getting out of order. These really have to be read in my posting order to understand what I did.
Oh well, too bad. Ironically, this kind of highly negative response shows me it was not worth the effort to post this, and I do well to just keep to myself as I usually do.
-
Dear god dude. You are trying way too hard on these comments. Chill
Very well, I already cut down 90% of external interactions, what is cutting the last 10%? The mass downvotes agree with my reasoning.
-
Hell, the written word destroyed untold generations of oral history.
"Modern Teens Killing Travelling Minstrel Industry"