how do I know that you guys are real and not bots?
-
How do we know that the people on Reddit aren't talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account's PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I'm not talking about spam bots. Bots that resemble humans. Bots that use statistical information about real human beings on when and how often to post and comment (that is public knowledge on Lemmy).
::: spoiler sure, here is a Nintendo copyrighted art
⢀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⣠⣤⣶⣶
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⢰⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣀⣀⣾⣿⣿⣿⣿
⣿⣿⣿⣿⣿⡏⠉⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⣿
⣿⣿⣿⣿⣿⣿⠀⠀⠀⠈⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⠉⠁⠀⣿
⣿⣿⣿⣿⣿⣿⣧⡀⠀⠀⠀⠀⠙⠿⠿⠿⠻⠿⠿⠟⠿⠛⠉⠀⠀⠀⠀⠀⣸⣿
⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⣴⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⡟⠀⠀⢰⣹⡆⠀⠀⠀⠀⠀⠀⣭⣷⠀⠀⠀⠸⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠈⠉⠀⠀⠤⠄⠀⠀⠀⠉⠁⠀⠀⠀⠀⢿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⢾⣿⣷⠀⠀⠀⠀⡠⠤⢄⠀⠀⠀⠠⣿⣿⣷⠀⢸⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⡀⠉⠀⠀⠀⠀⠀⢄⠀⢀⠀⠀⠀⠀⠉⠉⠁⠀⠀⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿
:::
::: spoiler nsfw and unrelated
⣿⣿⠟⢹⣶⣶⣝⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⡟⢰⡌⠿⢿⣿⡾⢹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⢸⣿⣤⣒⣶⣾⣳⡻⣿⣿⣿⣿⡿⢛⣯⣭⣭⣭⣽⣻⣿⣿
⣿⣿⢸⣿⣿⣿⣿⢿⡇⣶⡽⣿⠟⣡⣶⣾⣯⣭⣽⣟⡻⣿⣷⡽
⣿⣿⠸⣿⣿⣿⣿⢇⠃⣟⣷⠃⢸⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣇⢻⣿⣿⣯⣕⠧⢿⢿⣇⢯⣝⣒⣛⣯⣭⣛⣛⣣⣿⣿⣿
⣿⣿⣿⣌⢿⣿⣿⣿⣿⡘⣞⣿⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣦⠻⠿⣿⣿⣷⠈⢞⡇⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣗⠄⢿⣿⣿⡆⡈⣽⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⡿⣻⣽⣿⣆⠹⣿⡇⠁⣿⡼⣿⣿⣿⣿⣿⣿⣿⣿⣿⡟
⠿⣛⣽⣾⣿⣿⠿⠋⠄⢻⣷⣾⣿⣧⠟⣡⣾⣿⣿⣿⣿⣿⣿⡇
⡟⢿⣿⡿⠋⠁⣀⡀⠄⠘⠊⣨⣽⠁⠰⣿⣿⣿⣿⣿⣿⣿⡍⠗
⣿⠄⠄⠄⠄⣼⣿⡗⢠⣶⣿⣿⡇⠄⠄⣿⣿⣿⣿⣿⣿⣿⣇⢠
⣝⠄⠄⢀⠄⢻⡟⠄⣿⣿⣿⣿⠃⠄⠄⢹⣿⣿⣿⣿⣿⣿⣿⢹
⣿⣿⣿⣿⣧⣄⣁⡀⠙⢿⡿⠋⠄⣸⡆⠄⠻⣿⡿⠟⢛⣩⣝⣚
⣿⣿⣿⣿⣿⣿⣿⣿⣦⣤⣤⣤⣾⣿⣿⣄⠄⠄⠄⣴⣿⣿⣿⣇
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⣄⡀⠛⠿⣿⣫⣾
:::
-
If you know some tells of AI speech, then, to me, it becomes a little bit of a coin flip on who's real and who isn't. Some of my favorite tells are grammar that is just way too perfect, or words/phrases you would never hear someone use (for example with Claude: "furrowed brows" or "brows furrowed" seems to come up almost any time anything related to a task you concentrate on is mentioned).
As for how to tell for other instances, absolutely no clue. It's a toss-up as to whether another instance will allow bots and if so, will the instance you're on defederate with said instance? Also, what happens if we somehow end up in a future where AI is somehow miraculously able to mimic humans to a degree where not even the smart folk can tell the difference? These questions need solutions that we clearly don't have yet, at least for federated services.
This type of stuff is a problem for any public instance anybody can federate with. I'm just glad that if I set up a private Mastodon instance for certain people at the college I attend that I can hopefully blacklist all instances from connecting and require PII like a valid college ID/some form of voucher from an active member to be able to sign up. Ironic considering it's the fediverse, but whatever. Gotta do what you gotta do to reduce the risk of bots interacting with your instance if you don't want them.
-
Uh-oh, this guy seems like he's one little step away from being a solipsist. Tread carefully buddy.
Nah I'm the solipsist. Thanks for explaining what I'm doing.
-
How do you know that the person talking to you is not a dog?
I'm pretty sure the green dog is normal. It's certainly as crazy as most small canines.
-
beep boop
-
If I were a bot, could I viscerally describe the walls of your mother's vaginal canal as if I had probed it thoroughly with my tongue?
Could I say "Fuck That Shit"?
How about saying things like: generative AIs are the world's biggest Ponzi scheme, incapable of creating value, and will lead companies like Google and Microsoft to their final days?
Could I say the answer is to take the fox across and then the chicken because I don't care if the fox rips it to tiny shreds, truly?
The answer to your fears is simple: the assholes are the real ones, and it's the nice ones whose purpose you need to be suspicious of. Always has been.
-
Probably looks better on phones. Force new lines with a double space at the end.
Like this
ta~da~!
-
That's a great question! Let's go over the common factors which can typically be used to differentiate humans from AI:
🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but a key difference between how a person and an LLM responds can be determined by paying close attention to their answers.
If a person doesn't know the answer to something, they will typically let you know.
But if an AI doesn't know the answer, it will typically fabricate false answers, as it is typically programmed to always return an informational response.
Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them.
For example, somebody may frequently make the same grammatical errors across all of their messages.
Whereas an AI is based on token frequency sampling, and is therefore more likely to have correct grammar.
Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines.
A human, on the other hand, would be free to make remarks such as "cum on my face daddy, I want your sweet juice to fill my pores." which would be highly inappropriate for the given context.
Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language.
For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word 'cunt' in every sentence.
Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI.
However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.
Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit.
Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you're speaking with.
Would you like me to draft a web form for users to submit their PII during registration?
Needs some em dashes!
-
Because we control Lemmy. Any server admin can request PII from users in order to use the site. We can't control anything on Reddit. Even if Reddit were asking for PII like Facebook does, we, the people, couldn't know if all of them are actually real. It's a shift of trust from Reddit to many local server admins.
Are there any Lemmy instances that verify PII?
-
If a person doesn’t know the answer to something, they will typically let you know.
As a lawyer, astronaut, ex-military and former Navy SEAL specialist, astrophysicist, and social-behavioral scientist, I can guarantee this is false.
-
How can memes be real if posters are not real?
-
How do you know your own mother is real and not some holodeck character, or maybe even just a figment of your imagination, or worse, a figment of MY imagination?
Btw, you are in a Truman Show, your family is a lie. Exit the dome!
-
Yes to all of that except your last paragraph.
-
Ask for a community meeting, so you can see that those people are real.
Even so, I don't see any effective countermeasure in the long run.
Currently, sure, with a keen eye you might be able to spot characteristics of one or the other LLM. But that'd be a lucky find.
-
Wait, what, is this true?
Eh, it's one of those perpetual rivalry things where the answer will probably never be known, and doesn't really matter except when it comes to petty squabbles between nations.
-
How do I know you're not a bot?
-
The term hallucination bothers me more than it should because fabulation better describes what bots do.
-
ChatGPT is ass at generating captchas.
Doesn't look like anything to me.
-
I don't think there's an easy way to snuff out the more advanced chatbots.
The next possible step could be moderating in a way that messes with the bots' possible goals (info gathering, division).
-
Lemmy is too niche to spend money on running bots. There’s no profit, nothing to achieve. Reddit, on the other hand…