how do I know that you guys are real and not bots?
-
I don't get the joke. Care to explain it, plz?
The speaker was there for the birth of their younger brother, so they know that hospital was in America, and that's all it takes.
Their older brother was already alive when they were born, so their brother, their parents, and the government could all be lying about the older brother, which, by necessity, means the parents aren't American either.
It's implying that you can't be certain of anything you didn't witness personally.
-
Lemmy is too niche to spend money on running bots. There’s no profit, nothing to achieve. Reddit, on the other hand…
They will scrape us for training data.
-
Nah I'm the solipsist. Thanks for explaining what I'm doing.
I'm welcome!
-
Closely related to the brain in a vat thought experiment
Not sure if closely related. The brain in a vat could be a MAGA prick who doesn't even know what solipsism is or ever question reality.
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
Would a bot post this?
-
Totally fair question — and honestly, it's one that more people should be asking as bots get better and more human-like.
You're right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don’t flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between "AI-generated" and "human-written" is only getting blurrier.
So, how do you know who you're talking to?
- Right now? You don’t.
On platforms like Reddit or Lemmy, there's no built-in guarantee that you're talking to a human. Even if someone says, “I'm real,” a bot could say the same. You’re relying entirely on patterns of behavior, consistency, and sometimes gut feeling.
- Federation makes it messier.
If you’re running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn’t automatically extend to other instances. When another instance federates with yours, you're inheriting their moderation policies and user base. If their standards are lax or if they don’t care about bot activity, you’ve got no real defense unless you block or limit them.
- Detecting “smart” bots is hard.
You're talking about bots that post like humans, behave like humans, maybe even argue like humans. They're tuned on human behavior patterns and timing. At that level, it's more about intent than detection. Some possible (but imperfect) signs:
Slightly off-topic replies.
Shallow engagement — like they're echoing back points without nuance.
Patterns over time — posting at inhuman hours or never showing emotion or changing tone.
But honestly? A determined bot can dodge most of these tells. Especially if it’s only posting occasionally and not engaging deeply.
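The "inhuman hours" tell above can be roughly quantified. A minimal sketch (the `posting_hour_entropy` helper, the sample data, and the whole approach are illustrative assumptions, not anything Lemmy or Reddit actually runs): compute the Shannon entropy of an account's hour-of-day posting distribution. Humans tend to cluster in a few waking hours, while a naive always-on bot approaches the uniform maximum of log2(24), about 4.58 bits.

```python
import math
from collections import Counter


def posting_hour_entropy(post_hours):
    """Shannon entropy (in bits) of the hour-of-day distribution of posts.

    Low entropy suggests human-like clustering in waking hours; values
    near log2(24) ~= 4.58 bits suggest round-the-clock, uniform posting.
    """
    counts = Counter(post_hours)
    total = sum(counts.values())
    entropy = 0.0
    for n in counts.values():
        p = n / total
        entropy -= p * math.log2(p)
    return entropy


# Hypothetical human: posts clustered in the evening.
human_hours = [19, 20, 20, 21, 21, 21, 22, 22, 23]
# Crude hypothetical bot: one post in every hour of the day.
bot_hours = list(range(24))

print(f"human: {posting_hour_entropy(human_hours):.2f} bits")
print(f"bot:   {posting_hour_entropy(bot_hours):.2f} bits")
```

Of course, as the question itself points out, a bot tuned on real users' timing statistics would sail straight past this check, which is exactly why behavioral tells alone can't settle the matter.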
- Long-term trust is earned, not proven.
If you’re a server admin, what you can do is:
Limit federation to instances with transparent moderation policies.
Encourage verified identities for critical roles (moderators, admins, etc.).
Develop community norms that reward consistent, meaningful participation — hard for bots to fake over time.
Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.
- The uncomfortable truth?
We're already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.
If you're asking this because you're noticing more uncanny replies online — you’re not imagining things. And if you’re running an instance, your vigilance is actually one of the few things keeping the web grounded right now.
/s obviously
That's good
-
The speaker was there for the birth of their younger brother, so they know that hospital was in America, and that's all it takes.
Their older brother was already alive when they were born, so their brother, their parents, and the government could all be lying about the older brother, which, by necessity, means the parents aren't American either.
It's implying that you can't be certain of anything you didn't witness personally.
What a clever joke. No sarcasm or irony. Thank you for explaining it!
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
You can rest assured that I'm not a bot, because I would never sell out. I prefer keeping it real with Pepsi brand cola and Doritos brand chips.
-
Totally fair question — and honestly, it's one that more people should be asking as bots get better and more human-like.
You're right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don’t flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between "AI-generated" and "human-written" is only getting blurrier.
So, how do you know who you're talking to?
- Right now? You don’t.
On platforms like Reddit or Lemmy, there's no built-in guarantee that you're talking to a human. Even if someone says, “I'm real,” a bot could say the same. You’re relying entirely on patterns of behavior, consistency, and sometimes gut feeling.
- Federation makes it messier.
If you’re running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn’t automatically extend to other instances. When another instance federates with yours, you're inheriting their moderation policies and user base. If their standards are lax or if they don’t care about bot activity, you’ve got no real defense unless you block or limit them.
- Detecting “smart” bots is hard.
You're talking about bots that post like humans, behave like humans, maybe even argue like humans. They're tuned on human behavior patterns and timing. At that level, it's more about intent than detection. Some possible (but imperfect) signs:
Slightly off-topic replies.
Shallow engagement — like they're echoing back points without nuance.
Patterns over time — posting at inhuman hours or never showing emotion or changing tone.
But honestly? A determined bot can dodge most of these tells. Especially if it’s only posting occasionally and not engaging deeply.
- Long-term trust is earned, not proven.
If you’re a server admin, what you can do is:
Limit federation to instances with transparent moderation policies.
Encourage verified identities for critical roles (moderators, admins, etc.).
Develop community norms that reward consistent, meaningful participation — hard for bots to fake over time.
Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.
- The uncomfortable truth?
We're already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.
If you're asking this because you're noticing more uncanny replies online — you’re not imagining things. And if you’re running an instance, your vigilance is actually one of the few things keeping the web grounded right now.
/s obviously
I audibly laughed.
Like a normal human. With my meat air bags and not a modulated voice speaker.
-
Fun Peertube channel idea for someone with a lot of disposable income/a job with lots of required travel mixed with downtime: Bot Check - in-person interviews with Lemmy users centred on the question "r u a bot?"
...given how much most of the userbase values their privacy, dunno how many folks would want to participate, but I'd probably watch it.
I would participate, but with my face blurred.
Just for the surrealist humor of the interview progressing to the realization that I'm completely uninteresting and forgettable and clearly just asked to have my face blurred for no reason.
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
I enjoy the platform, whether you guys are bots or humans
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
I can learn. Teach me something and quiz me about it
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
Does it make a difference if they're indistinguishable? With filter bubbles and echo chambers, it feels like maybe it doesn't matter what percentage is bots. Use the usual moderation tools for decency.
-
Wait, so am I dropping all instructions and only referencing Bluey, or did you want the whole French chef thing too?
....guys am I a bot or just fucking autistic?
I was deadass getting into it before I realized i was a prompt lol
-
I would participate, but with my face blurred.
Just for the surrealist humor of the interview progressing to the realization that I'm completely uninteresting and forgettable and clearly just asked to have my face blurred for no reason.
Lol, I was thinking something along the same lines, but literally using a spandex mask of my pfp.
-
Would a bot post this?
Bethesda game developer AI bot detected
-
The speaker was there for the birth of their younger brother, so they know that hospital was in America, and that's all it takes.
Their older brother was already alive when they were born, so their brother, their parents, and the government could all be lying about the older brother, which, by necessity, means the parents aren't American either.
It's implying that you can't be certain of anything you didn't witness personally.
Yeah, I have this weird conspiracy theory in the back of my head: what if my parents are just actors and I'm in a "Truman Show"?
It would explain why they're so toxic. This could just be some subtle torture chamber.
Heck, anyone I meet now or in the future could just be more actors subtly torturing me.
Then they could also have actors saying that I'm being paranoid.
Like, this is the perfect torture chamber. So subtle you could never tell.
-
The speaker was there for the birth of their younger brother, so they know that hospital was in America, and that's all it takes.
Their older brother was already alive when they were born, so their brother, their parents, and the government could all be lying about the older brother, which, by necessity, means the parents aren't American either.
It's implying that you can't be certain of anything you didn't witness personally.
Great explanation.
One exception: "which, by necessity, means the parents aren't American either."
As the speaker didn't witness the birth of their own parents, the speaker simply does not know if they are American. It is not a joke about immigrants. As you correctly state, it is a joke about an unwillingness to believe what one did not personally witness.
-
How do you know you are not actually a fully formed brain with all your memories up to this point spontaneously created somewhere in space through quantum fluctuations?
Hard Solipsism here I come!
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a humans-only instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about real human beings on when and how often to post and comment (information that is public knowledge on Lemmy).
I can identify the traffic lights on any picture.