How do I know that you guys are real and not bots?
-
How do you know your own mother is real and not some holodeck character, or maybe even just a figment of your imagination, or worse, a figment of MY imagination?
Btw, you are in a Truman Show, your family is a lie. Exit the dome!
-
If I were a bot, could I viscerally describe the walls of your mother's vaginal canal as if I had probed it thoroughly with my tongue?
Could I say "Fuck That Shit"?
How about saying things like: generative AIs are the world's biggest Ponzi scheme, incapable of creating value, and will lead companies like Google and Microsoft to their final days?
Could I say the answer is to take the fox across and then the chicken because I don't care if the fox rips it to tiny shreds, truly?
The answer to your fears is simple: the assholes are the real ones, and it's the nice ones whose purpose you need to be suspicious of. Always has been.
Yes to all of that except your last paragraph.
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots. I mean bots that resemble humans. Bots that use statistical information about real human beings on when and how often to post and comment (that is public knowledge on Lemmy).
Ask for a community meeting, so you can see that those people are real.
Despite that, I don't see any effective countermeasure in the long run.
Currently, sure, with a keen eye you might be able to spot characteristics of one or the other LLM. But that'd be a lucky find.
-
Wait, what, is this true?
Eh, it's one of those perpetual rivalry things where the answer will probably never be known, and doesn't really matter except when it comes to petty squabbles between nations.
-
How do I know your not a bot?
-
That's a great question! Let's go over the common factors which can typically be used to differentiate humans from AI:
🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but a key difference between how a person and an LLM responds can be determined by paying close attention to their answers. If a person doesn't know the answer to something, they will typically let you know.
But if an AI doesn't know the answer, it will typically fabricate a false answer, as it is generally programmed to always return an informational response.
Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them. For example, somebody may frequently make the same grammatical errors across all of their messages.
An AI, on the other hand, is based on token frequency sampling, and is therefore more likely to have correct grammar.
Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines. A human, on the other hand, would be free to make remarks such as "cum on my face daddy, I want your sweet juice to fill my pores," which would be highly inappropriate for the given context.
Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language.
For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word 'cunt' in every sentence.
Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI.
However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.
Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit.
Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you're speaking with.
Would you like me to draft a web form for users to submit their PII during registration?
The term "hallucination" bothers me more than it should, because "fabulation" better describes what bots do.
-
ChatGPT is ass at generating captchas.
Doesn't look like anything to me.
-
I don't think there's an easy way to snuff out the more advanced chatbots.
The next possible step could be moderating in a way that messes with the bots' possible goals. (Info gathering, division)
-
Lemmy is too niche to spend money on running bots. There’s no profit, nothing to achieve. Reddit, on the other hand…
-
Lemmy is too niche to spend money on running bots. There’s no profit, nothing to achieve. Reddit, on the other hand…
That’s bot talk!
-
are there any lemmy instances that verify PII?
I highly doubt it; one that required PII to sign up would be very unlikely to have many users (especially in the current climate, so to speak).
And from the admin side, that sounds like a nightmare to deal with.
-
You don't.
Worse, I may be a human today and a bot tomorrow. I may stop posting and my account gets taken over/hacked.
There is an old joke.
I know my little brother is an American. Born in America, lived his life in America.
My older brother... I don't know about him.
-
And then the channel turns out to be entirely AI-generated.
Better - you mix it up once in a while, so that 'yes' or 'no' is not always a given.
-
The grammar and spelling errors
-
Everyone on Lemmy is a bot except you
-
Could a bot do this?
(You can't see me, but trust me, it's very impressive)
-
You have entered a low-grade reality. Basically an 80s text adventure. Truth, depth and humanity have been thrown out the window.
-
Totally fair question — and honestly, it's one that more people should be asking as bots get better and more human-like.
You're right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don’t flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between "AI-generated" and "human-written" is only getting blurrier.
So, how do you know who you're talking to?
- Right now? You don’t.
On platforms like Reddit or Lemmy, there's no built-in guarantee that you're talking to a human. Even if someone says, “I'm real,” a bot could say the same. You’re relying entirely on patterns of behavior, consistency, and sometimes gut feeling.
- Federation makes it messier.
If you’re running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn’t automatically extend to other instances. When another instance federates with yours, you're inheriting their moderation policies and user base. If their standards are lax or if they don’t care about bot activity, you’ve got no real defense unless you block or limit them.
- Detecting “smart” bots is hard.
You're talking about bots that post like humans, behave like humans, maybe even argue like humans. They're tuned on human behavior patterns and timing. At that level, it's more about intent than detection. Some possible (but imperfect) signs:
  - Slightly off-topic replies.
  - Shallow engagement — like they're echoing back points without nuance.
  - Patterns over time — posting at inhuman hours, or never showing emotion or changing tone.
But honestly? A determined bot can dodge most of these tells. Especially if it’s only posting occasionally and not engaging deeply.
- Long-term trust is earned, not proven.
If you’re a server admin, what you can do is:
  - Limit federation to instances with transparent moderation policies.
  - Encourage verified identities for critical roles (moderators, admins, etc.).
  - Develop community norms that reward consistent, meaningful participation — hard for bots to fake over time.
  - Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.
- The uncomfortable truth?
We're already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.
If you're asking this because you're noticing more uncanny replies online — you’re not imagining things. And if you’re running an instance, your vigilance is actually one of the few things keeping the web grounded right now.
/s obviously
-
You don't.
Worse, I may be a human today and a bot tomorrow. I may stop posting and my account gets taken over/hacked.
There is an old joke.
I know my little brother is an American. Born in America, lived his life in America.
My older brother... I don't know about him.
I don't get the joke. Care to explain it, plz?
-
How do I know your not a bot?
I know you're not one because you used "your" wrong.
Unless.....