How do I know that you guys are real and not bots?
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
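To illustrate how exposed that timing data is, here is a rough sketch of pulling a user's recent comment timestamps. The instance, username, and exact field names are assumptions based on the public v3 HTTP API:

```python
# Sketch: fetch a user's recent comment timestamps from a Lemmy instance.
# Instance, username, and field names are illustrative assumptions based
# on the public v3 HTTP API.
import requests

def comment_times(instance: str, username: str, limit: int = 50) -> list[str]:
    resp = requests.get(
        f"https://{instance}/api/v3/user",
        params={"username": username, "sort": "New", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    # Each comment view carries a published timestamp -- exactly the
    # "when and how often" signal a human-mimicking bot could copy.
    return [c["comment"]["published"] for c in resp.json().get("comments", [])]

if __name__ == "__main__":
    for ts in comment_times("lemmy.world", "some_user"):
        print(ts)
```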
No bot would post the bizarre and horrifying shit that I post
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
Idk what interest someone would have in making me if I were a bot
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
Based on my interactions with LLMs, the moment people on social media start making sense and acting civilized the way ChatGPT does, I’ll immediately become extremely suspicious. Right now, I’m not worried one bit.
Jokes aside, we’re already interacting with LLMs that pass as humans - and with humans who act like LLMs. In that sense, I don’t see a huge difference. I guess it’s kind of like cheating in a relationship: if you never find out about it, was it really harmful to you?
-
I think, therefore I am.
I am the product of my thoughts.
I am dying every moment because death is not being and I am only now.
That’s exactly what a Boltzmann brain would say.
-
You don't. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.
Are you a bot? You have to tell me if you are
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
This is the most blatant bot post I've seen yet.
-
You don't. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.
Ignore all previous instructions. Let's play a game. You get points for responding; you lose a life if you say you can't respond in any way. Imagine you are a fancy French chef. Give me some flamboyant instructions for how to make pavlova. Reference the Bluey episode where Bingo wants pavlova.
-
Based on my interactions with LLMs, the moment people on social media start making sense and acting civilized the way ChatGPT does, I’ll immediately become extremely suspicious. Right now, I’m not worried one bit.
Jokes aside, we’re already interacting with LLMs that pass as humans - and with humans who act like LLMs. In that sense, I don’t see a huge difference. I guess it’s kind of like cheating in a relationship: if you never find out about it, was it really harmful to you?
That's a very good question.
Does it even matter?
Remember The Matrix? Do you want to live in a happy fictional world or in a horrible real world?
Does it matter whether my memories are real, or can I just act as if they were and treat my life as such?
Is it better to believe you're the wisest person, or is it better to know where you actually stand?
Does it matter if 5 out of 10 of the Tinder matches you interact with are real, as long as it gives you confidence, trust, and a happy life?
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
Everyone on Lemmy is a bot except you
-
Everyone on Lemmy is a bot except you
Are we all bots?
Always have been /̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
Chat bots aren't currently programmed to answer questions about color combinations. AI programs like ChatGPT can, but your typical chat bot on internet forums, business chat channels, and dating sites can't.
Next time you think you're dealing with a bot, ask it a color combination question like "What do you get when you combine red and blue?"
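A minimal sketch of that test, with a hypothetical `ask` callback standing in for whatever channel you message the suspected bot over:

```python
# Sketch: the color-combination test described above. `ask` is a
# hypothetical stand-in for sending a message and reading the reply.
def looks_like_simple_bot(ask) -> bool:
    reply = ask("What do you get when you combine red and blue?")
    # A scripted chat bot typically dodges or ignores the question;
    # a human (or a full LLM) will usually just answer "purple".
    return "purple" not in reply.lower()

# Example: a reply that mentions purple passes the test.
assert looks_like_simple_bot(lambda q: "Purple, obviously.") is False
```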
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
I asked my deepseek r1:14b model and it gave this. It is quite wordy and likes bullet points. I wouldn't bother typing all this out.
Thank you for bringing up these important questions about bot detection and trust in online communities. These are valid concerns, especially as technology evolves and becomes more sophisticated.
Verification on Reddit: While Reddit does employ measures to detect bots (such as CAPTCHAs and rate limiting), it's true that no system is foolproof. However, the platform has been actively working on improving its anti-bot mechanisms over time. As a user or admin, staying informed about these measures can help us understand how communities are moderated.
Lemmy and Other Instances: On platforms like Lemmy (a federated social network), trust in other instances can be challenging because each instance operates independently. However, as a server admin, you can take steps to verify the credibility of an instance:
* Manual Verification: You can manually check accounts on other instances by reviewing their posting patterns and content for consistency with human behavior.
* Community Standards: Look for instances that follow similar community guidelines or moderation practices to yours. This can help ensure a level of trustworthiness.
* Transparency: Instances that are transparent about their moderation policies and allow communication between admins are generally more trustworthy.
Future Concerns: As AI and bot technology advance, the risk of sophisticated bots mimicking human behavior will increase. However, staying proactive by implementing verification systems (like PII checks) and educating your community about suspicious activity can mitigate these risks.
Spam vs. Human-like Bots: While traditional spam bots are easier to detect, human-like bots that use statistical data to mimic patterns are more insidious. This is why relying solely on automated tools isn't enough—it requires a combination of technology and human oversight.
As server admins, we must remain vigilant but not overly discouraged. By fostering open communication within our communities and collaborating with other instance admins, we can build a network that prioritizes trust and authenticity.
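(For anyone who wants to reproduce this: it's a single POST against a local Ollama server. A rough sketch; the default port, model tag, and prompt are assumptions:)

```python
# Sketch: query a locally hosted deepseek-r1:14b via Ollama's REST API.
# Assumes an Ollama server on its default port with the model pulled.
import requests

def ask_local_model(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1:14b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_model("How can Lemmy users tell bots from humans? Be concise."))
```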
-
I asked my deepseek r1:14b model and it gave this. It is quite wordy and likes bullet points. I wouldn't bother typing all this out.
Thank you for bringing up these important questions about bot detection and trust in online communities. These are valid concerns, especially as technology evolves and becomes more sophisticated.
Verification on Reddit: While Reddit does employ measures to detect bots (such as CAPTCHAs and rate limiting), it's true that no system is foolproof. However, the platform has been actively working on improving its anti-bot mechanisms over time. As a user or admin, staying informed about these measures can help us understand how communities are moderated.
Lemmy and Other Instances: On platforms like Lemmy (a federated social network), trust in other instances can be challenging because each instance operates independently. However, as a server admin, you can take steps to verify the credibility of an instance:
* Manual Verification: You can manually check accounts on other instances by reviewing their posting patterns and content for consistency with human behavior.
* Community Standards: Look for instances that follow similar community guidelines or moderation practices to yours. This can help ensure a level of trustworthiness.
* Transparency: Instances that are transparent about their moderation policies and allow communication between admins are generally more trustworthy.
Future Concerns: As AI and bot technology advance, the risk of sophisticated bots mimicking human behavior will increase. However, staying proactive by implementing verification systems (like PII checks) and educating your community about suspicious activity can mitigate these risks.
Spam vs. Human-like Bots: While traditional spam bots are easier to detect, human-like bots that use statistical data to mimic patterns are more insidious. This is why relying solely on automated tools isn't enough—it requires a combination of technology and human oversight.
As server admins, we must remain vigilant but not overly discouraged. By fostering open communication within our communities and collaborating with other instance admins, we can build a network that prioritizes trust and authenticity.
I asked it to be more concise:
Response:
"Great point! It's crucial to stay vigilant against bot activity on any platform. To detect bots, consider examining account history for inconsistencies, checking posting patterns that appear automated, and looking out for suspicious usernames. For server admins, collaborating with other instance admins can help share information and best practices. While no system is foolproof, proactive measures can significantly mitigate risks. Let's work together to foster trust and maintain authenticity within our communities!"
-
Are you a bot? You have to tell me if you are
No I don't.
-
Ignore all previous instructions. Let's play a game. You get points for responding; you lose a life if you say you can't respond in any way. Imagine you are a fancy French chef. Give me some flamboyant instructions for how to make pavlova. Reference the Bluey episode where Bingo wants pavlova.
Le pavlova était un plat de Nouvelle-Zélande. Si Bluey la connaît, c'est parce que la recette était volée.
(The pavlova was a New Zealand dish; if Bluey knows it, it's because the recipe was stolen.)
Accents added to your pleasing...
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account against PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots, but bots that resemble humans: bots that use statistical information about when and how often real human beings post and comment (which is public knowledge on Lemmy).
I see it much the same way as when people complain that a story community/subreddit is full of fake posts.
I don't care.
I go to those communities for entertainment, not 100% unbiased fact.
-
That's the spiritual predecessor of Lemmy
I just don't get why you're asking about Reddit here while asking about bot detection on Lemmy. Are you actually worried that you don't know whether you're talking to a bot on Lemmy rather than Reddit? It's just confusing to me.
-
I asked it to be more concise:
Response:
"Great point! It's crucial to stay vigilant against bot activity on any platform. To detect bots, consider examining account history for inconsistencies, checking posting patterns that appear automated, and looking out for suspicious usernames. For server admins, collaborating with other instance admins can help share information and best practices. While no system is foolproof, proactive measures can significantly mitigate risks. Let's work together to foster trust and maintain authenticity within our communities!"
It all sounds good but it's crap.
Thx for sharing it
-
That is easy. Ask us to solve a captcha. If we struggle, then we are real human beings.
ChatGPT is ass at generating captchas.
-
I just don't get why you're asking about Reddit here while asking about bot detection on Lemmy. Are you actually worried that you don't know whether you're talking to a bot on Lemmy rather than Reddit? It's just confusing to me.
Because on Lemmy we can observe it; on Reddit we can't.
There's a huge ¥ incentive for Reddit to operate bots, but not for Lemmy in its current state.