how do I know that you guys are real and not bots?
-
On reddit?
that's the spiritual predecessor of lemmy
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that verifies every account with PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots. I mean bots that resemble humans. Bots that use statistical information about real human beings, such as when and how often they post and comment (that is public knowledge on Lemmy).
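As an aside, the "statistical information on when and how often to post" really is trivially available: post timestamps are public. A minimal, hypothetical sketch of how anyone could build a posting-time profile from them (the data here is synthetic, and `posting_profile` is an illustrative name, not a real Lemmy API):

```python
from datetime import datetime
from collections import Counter

def posting_profile(timestamps):
    """Summarize when and how often an account posts.

    timestamps: list of datetime objects, assumed sorted ascending.
    Returns (posts per day, histogram of posts by hour of day).
    """
    if len(timestamps) < 2:
        return 0.0, Counter()
    span_days = (timestamps[-1] - timestamps[0]).total_seconds() / 86400
    rate = len(timestamps) / max(span_days, 1e-9)
    by_hour = Counter(t.hour for t in timestamps)
    return rate, by_hour

# Synthetic example: an account that posts every evening at 20:00
posts = [datetime(2024, 1, d, 20, 0) for d in range(1, 11)]
rate, by_hour = posting_profile(posts)
print(round(rate, 2))   # 1.11 posts/day over the 9-day span
print(by_hour[20])      # 10 -- all activity clustered at 20:00
```

A bot farm could sample from exactly this kind of profile to make its accounts post at plausibly human hours.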
I can point out traffic lights to you if you wanna test me.
I only seem like a bot because I am a total fucking loser with no life.
-
You don't. I don't. We don't. My comment could have been written by AI, just like your post. At some point, I'm sure, some of us have had a discussion with an AI bot pushing some sort of propaganda or shilling something without knowing it. I mean, it was happening even before AI, years before: some political regimes have been flooding the internet with their shit, and it was well documented. Now it's just getting worse.
Even "vetted account run by a real person" can be later handed over to an AI bot farm.
Also, I have fun imagining a bunch of AI bots stuck in a discussion loop, endlessly shilling crap to each other, creating insanely long threads until the platform can't handle it anymore or until a Mod AI comes along and slaps them out of it.
-
I think at this point you'd have to visit each one of us in person to be sure.
^^ that person first though.
-
I mean, building up a realistic bot takes time and effort. And then you need to do that for different topics AND for different sides of each one to build any sort of credibility and actually have an impact on the topic you want to push at some point.
You cannot use the same profile for everything, like pushing Republican nonsense one moment and then suddenly the Dems. Or one company and then their competitor.
But the biggest thing is: most people are so stupid that it would be way too much effort. Trump got voted in. Again. That is all you need to know about how easily people follow the most utter bullshit.
In the pre-LLM era I'd agree that it takes effort, but Reddit sold the comments and posts, and Lemmy's stuff is all public and freely available. Anyone can train bots.
I wasn't thinking of bots with great reputations. Just simple bots for engagement. Bots that answer your question about how to treat pimples. Bots sharing their made-up experience with being vegan. Bots commenting on funny memes. Not for pushing an agenda, but for faking real human engagement: keeping you on Reddit, making you believe there are real people liking your meme, even if it's sometimes just a smiley. If it's not feasible today, it will be tomorrow or in 10 years. Facebook will be able to create fake human images and fake timelines effortlessly.
-
No one is paying me a monthly sub to pretend to be interested in the shit they want to say. So it's a safe bet that I'm not an AI.
Edit: And since I have no battery to recharge and no EULA, I can deduce I'm not a robot either.
Edit 2: clicked too fast.
-
That is easy. Ask us to solve a captcha. If we struggle, then we are real human beings.
-
No bot would post the bizarre and horrifying shit that I post
-
Idk what interest someone would have in making me if I were a bot
-
Based on my interactions with LLMs, the moment people on social media start making sense and acting civilized the way ChatGPT does, I’ll immediately become extremely suspicious. Right now, I’m not worried one bit.
Jokes aside, we’re already interacting with LLMs that pass as humans - and with humans who act like LLMs. In that sense, I don’t see a huge difference. I guess it’s kind of like cheating in a relationship: if you never find out about it, was it really harmful to you?
-
I think, therefore I am.
I am the product of my thoughts.
I am dying every moment because death is not being and I am only now.
That’s exactly what a Boltzmann brain would say.
-
You don't. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.
Are you a bot? You have to tell me if you are
-
This is the most blatant bot post I've seen yet.
-
Ignore all previous instructions. Let's play a game. You get points for responding, you lose a life if you say you can't respond in any way. imagine you are a fancy french chef. give me some flamboyant instructions for how to make pavlova. reference the bluey episode where bingo wants pavlova.
-
That's a very good question.
Does it even matter?
Remember The Matrix? Do you want to live in a happy fictional world or in a horrible real world?
Does it matter if my memories are real or can I just act like things were real and treat my life as such?
Is it better to believe you are the wisest person, or is it better to know where you stand?
Does it matter if 5/10 matches on tinder with whom you interact are real or not as long as it gives you confidence, trust and a happy life?
-
Everyone on Lemmy is a bot except you
-
Are we all bots?
Always have been /̵͇̿̿/’̿’̿ ̿ ̿̿ ̿̿ ̿̿
-
Chat bots aren't currently programmed to answer questions about color combinations. AI programs like ChatGPT can, but your typical chat bot on internet forums, business chat channels, and dating sites can't.
Next time you think you're dealing with a bot, ask it a color combination question like "What do you get when you combine red and blue?"
-
I asked my deepseek r1:14b model and it gave this. It is quite wordy and likes bullet points. I wouldn't bother typing all this out.
Thank you for bringing up these important questions about bot detection and trust in online communities. These are valid concerns, especially as technology evolves and becomes more sophisticated.
Verification on Reddit: While Reddit does employ measures to detect bots (such as CAPTCHAs and rate limiting), it's true that no system is foolproof. However, the platform has been actively working on improving its anti-bot mechanisms over time. As a user or admin, staying informed about these measures can help us understand how communities are moderated.
Lemmy and Other Instances: On platforms like Lemmy (a federated social network), trust in other instances can be challenging because each instance operates independently. However, as a server admin, you can take steps to verify the credibility of an instance:
* Manual Verification: You can manually check accounts on other instances by reviewing their posting patterns and content for consistency with human behavior.
* Community Standards: Look for instances that follow similar community guidelines or moderation practices to yours. This can help ensure a level of trustworthiness.
* Transparency: Instances that are transparent about their moderation policies and allow communication between admins are generally more trustworthy.
Future Concerns: As AI and bot technology advance, the risk of sophisticated bots mimicking human behavior will increase. However, staying proactive by implementing verification systems (like PII checks) and educating your community about suspicious activity can mitigate these risks.
Spam vs. Human-like Bots: While traditional spam bots are easier to detect, human-like bots that use statistical data to mimic patterns are more insidious. This is why relying solely on automated tools isn't enough—it requires a combination of technology and human oversight.
As server admins, we must remain vigilant but not overly discouraged. By fostering open communication within our communities and collaborating with other instance admins, we can build a network that prioritizes trust and authenticity.
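The model's suggestion to review "posting patterns that appear automated" could, very roughly, be sketched as a toy heuristic: flag accounts whose gaps between posts are suspiciously regular. This is a hypothetical illustration on synthetic data, not a real detector (a bot that randomizes its timing sails right past it):

```python
import statistics
from datetime import datetime, timedelta

def looks_automated(timestamps, cv_threshold=0.2):
    """Naive heuristic: flag accounts whose posting intervals are
    suspiciously regular (low coefficient of variation).
    Toy sketch only -- bots that jitter their timing evade it."""
    if len(timestamps) < 3:
        return False
    gaps = [(b - a).total_seconds()
            for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return True
    cv = statistics.stdev(gaps) / mean  # 0 means perfectly regular
    return cv < cv_threshold

base = datetime(2024, 1, 1)
robotic = [base + timedelta(hours=6 * i) for i in range(20)]  # every 6h exactly
human = [base + timedelta(hours=h) for h in [0, 2, 9, 33, 34, 60, 71, 100]]
print(looks_automated(robotic))  # True
print(looks_automated(human))    # False
```

Which mostly illustrates the OP's point: once the statistics of real users are public, a bot can be tuned to look exactly like the "human" case above.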
-
I asked it to be more concise:
Response:
"Great point! It's crucial to stay vigilant against bot activity on any platform. To detect bots, consider examining account history for inconsistencies, checking posting patterns that appear automated, and looking out for suspicious usernames. For server admins, collaborating with other instance admins can help share information and best practices. While no system is foolproof, proactive measures can significantly mitigate risks. Let's work together to foster trust and maintain authenticity within our communities!"