Is there any counter AI bots in the fediverse
-
That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.
With human post-processing it's definitely more complicated. Bots usually post fully automated content, without human supervision or editing.
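The "undecided" verdict such scanners give can be pictured as a score threshold with a dead band. A minimal sketch; the function name and thresholds here are made up for illustration, not taken from any real detector:

```python
def classify(ai_score, hi=0.8, lo=0.2):
    """Three-way verdict for a hypothetical AI-text detector score in [0, 1].

    Scores inside the (lo, hi) dead band come back 'undecided',
    mirroring how assignment scanners often refuse to commit.
    """
    if ai_score >= hi:
        return "likely-llm"
    if ai_score <= lo:
        return "likely-human"
    return "undecided"

print(classify(0.9))  # likely-llm
print(classify(0.5))  # undecided
```

The wider the dead band, the larger the proportion of assignments that come back inconclusive.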
-
/r/SubSimGPT2Interactive/ does this.
Absolute banger subreddit
-
There is a big difference between a bot that provides functionality that is good for the community and a bot that only does things that interest you.
People are asking you not to do this. So, if you want to do it, do it on your own resources. I'm saying that as someone who set up almost 20 different instances (alien.top plus the topic-specific instances) just to have a place to run the mirroring bots.
The LLM bot is meant to provide the function of flagging LLM bots that operate on the fediverse.
I'm listening to you people, but I'm not getting good reasons as to why I should do it in a certain way, or even not do it at all.
Reason does take precedence over request. Y'all are strangers to me, and none of you are actually attempting to hear me out as I am doing for you.
-
The further I get down this thread, the more you sound like a person I don’t want to deal with. And looking at the downvotes, I’m not the only one.
If you want people blocking you, perhaps followed by communities and instances blocking you as well, carry on.
That's fine if people don't want to deal with me; I never interacted with them before this thread (most likely).
-
It is like I said. People on platforms like Reddit complain a lot about bots. This platform is kind of supposed to be the better version of that, hence not about the same negative dynamics. And I can still tell ChatGPT's unique style and a human apart. And once you go into detail, you'll notice the quirks or the intelligence of your conversational partner.
Reddit is different from the fediverse. They work on different principles, and I'd argue the fediverse is very libertarian.
Is there any way you can rule out survivorship bias? Plus I'm already doing preliminary stuff: I'm looking into making responses shorter so that there's less information to go on, and trying different models.
-
Reddit is different from the fediverse. They work on different principles, and I'd argue the fediverse is very libertarian.
Is there any way you can rule out survivorship bias? Plus I'm already doing preliminary stuff: I'm looking into making responses shorter so that there's less information to go on, and trying different models.
What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?
-
The LLM bot is meant to provide the function of flagging LLM bots that operate on the fediverse.
I'm listening to you people, but I'm not getting good reasons as to why I should do it in a certain way, or even not do it at all.
Reason does take precedence over request. Y'all are strangers to me, and none of you are actually attempting to hear me out as I am doing for you.
You were implying not just that you wanted to detect bots, but that you wanted to write your own set of bots that would pretend to be humans.
If your plan is only to write detection of bots, it's a whole different thing.
-
What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?
So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered that it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates with a confidence check. It's oddly meta.
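A "genetic algorithm of prompt templates" could look roughly like the sketch below: evolve templates against a fitness function that stands in for the confidence check. Everything here (the fragments, the toy fitness, the parameters) is hypothetical; in practice the fitness would score model output against a detector:

```python
import random

FRAGMENTS = [
    "Reply in one short sentence.",
    "Use casual punctuation.",
    "Never use bullet points.",
    "Occasionally make small typos.",
]

def random_template(k=2):
    # A template is a fixed-size combination of style fragments.
    return tuple(random.sample(FRAGMENTS, k))

def mutate(template):
    # Swap one fragment for a random one.
    parts = list(template)
    parts[random.randrange(len(parts))] = random.choice(FRAGMENTS)
    return tuple(parts)

def evolve(score_template, generations=10, pop_size=8):
    # Keep the top half each generation, refill with mutants of survivors.
    pop = [random_template() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score_template, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=score_template)

# Toy fitness: prefer templates mentioning brevity (a stand-in for a
# real confidence check against a detector).
best = evolve(lambda t: sum("short" in f for f in t))
```

The meta part is that the fitness function would itself be an LLM-detector, so the loop is an LLM optimizing prompts to evade LLM detection.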
-
You were implying not just that you wanted to detect bots, but that you wanted to write your own set of bots that would pretend to be humans.
If your plan is only to write detection of bots, it's a whole different thing.
Actually, I asked if there were any counter-AI measures; there probably are, just no one knows what they are. In the post I'm referring to, I explicitly said I'd make a human-like bot, and I'm saying the anti-AI crowd should make a program to flag AI bots.
Now my project should help people that want to fight AI on the fediverse.
-
Actually, I asked if there were any counter-AI measures; there probably are, just no one knows what they are. In the post I'm referring to, I explicitly said I'd make a human-like bot, and I'm saying the anti-AI crowd should make a program to flag AI bots.
Now my project should help people that want to fight AI on the fediverse.
See, so now you are back to saying your plan is to make a shitty thing and put the burden on those against it to come up with countermeasures. That's just lame.
-
See, so now you are back to saying your plan is to make a shitty thing and put the burden on those against it to come up with countermeasures. That's just lame.
I'm not burdening them. If they don't want to take on the project, they don't have to.
-
I'm not burdening them. If they don't want to take on the project, they don't have to.
Ok, final message because I'm tired of this:
- you are openly admitting that you are going to piss on the well by adding a bot that pretends to be a human.
- you are openly admitting that you are going to do this without providing any form of mitigation.
- you are going to do this while pushing data to the whole network. No prior testing in a test instance, not even using your own instance for it.
- you think it is fine to leave the onus of "detecting" the bot to others.
You are a complete idiot.
-
Ok, final message because I'm tired of this:
- you are openly admitting that you are going to piss on the well by adding a bot that pretends to be a human.
- you are openly admitting that you are going to do this without providing any form of mitigation.
- you are going to do this while pushing data to the whole network. No prior testing in a test instance, not even using your own instance for it.
- you think it is fine to leave the onus of "detecting" the bot to others.
You are a complete idiot.
I've heard of poisoning the well, but I don't get what well I'm pissing into (no one's articulating anything). Yeah, I admit I don't care much about the counter-AI measures. Idk why you're essentially repeating what I'm saying. I intend to field test it (there's no articulable reason not to). There's no onus, because no one has to make counter-AI measures. It's their choice.
Apparently I'm an idiot for developing an essentially human-like entity.
-
So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered that it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates with a confidence check. It's oddly meta.
I often recommend Mistral-Nemo-Instruct; I think that one strikes a good balance. But be careful with it: it's not censored, so given the right prompt it might yell at people, talk about reproductive organs, etc. All in all, it's a job that takes some effort. You need a good model and a good prompt, and maybe also a persona. Then the entire framework to feed in the content and make decisions about what to respond to. And if you want to do it right, an additional framework for safety and monitoring. I think those are the usual pieces for an AI bot.
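Those pieces (a persona, a decision step for what to respond to, and a call to a locally hosted model) might be sketched like this. It assumes an Ollama server on its default port 11434 exposing the documented `/api/generate` endpoint; the model name, persona, and filtering rules are placeholders:

```python
import json
import urllib.request

PERSONA = "You are a terse, polite commenter. Answer in two sentences."

def should_reply(post: str) -> bool:
    """Crude decision step: skip empty posts, bot mentions, and walls of text."""
    text = post.strip().lower()
    return bool(text) and "bot" not in text and len(text) < 2000

def generate_reply(post: str, model: str = "mistral-nemo") -> str:
    """Query a local Ollama server for a non-streaming completion."""
    payload = json.dumps({
        "model": model,
        "prompt": f"{PERSONA}\n\nPost: {post}\nReply:",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

post = "What model handles small RAM well?"
if should_reply(post):
    pass  # reply = generate_reply(post)  # needs a running Ollama server
```

A real deployment would wrap this with the safety and monitoring layer mentioned above: rate limits, output filters, and logging of everything the bot posts.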
-
I've heard of poisoning the well, but I don't get what well I'm pissing into (no one's articulating anything). Yeah, I admit I don't care much about the counter-AI measures. Idk why you're essentially repeating what I'm saying. I intend to field test it (there's no articulable reason not to). There's no onus, because no one has to make counter-AI measures. It's their choice.
Apparently I'm an idiot for developing an essentially human-like entity.
I don’t get what well I’m pissing into
The well is the social graph itself. You are polluting the conversations by adding content that is neither original nor desirable.
I’m an idiot for developing an essentially human-like entity.
You are an idiot because you are pushing AI slop to people who are asking you not to, while thinking that you work in something groundbreaking.
-
I don’t get what well I’m pissing into
The well is the social graph itself. You are polluting the conversations by adding content that is neither original nor desirable.
I’m an idiot for developing an essentially human-like entity.
You are an idiot because you are pushing AI slop to people who are asking you not to, while thinking that you work in something groundbreaking.
No idea what a social graph is meant to be, but people do shitpost and meme even if people don't desire it, or if the meme is a reference.
Of course, "idiot" means anyone that uses AI. I'm not portraying myself as groundbreaking, even though I did make the fedi-plays genre.
-
People do things for fun sometimes.
This is not the same as playing basketball. Unleashing AI bots "just for the fun of it" ends up effectively poisoning the well.
It sounds like red teaming to me.
-
You want to write software that subverts the expectations of users (who are coming here with the expectation that they will be chatting with other people) and abuses resources provided by others, who did not ask for your help with any sort of LLM detection.
Have you never heard of red teaming?
-
No idea what that is, and that subreddit is dead.
There are active successors.
-
Have you never heard of red teaming?
Red teams are hired by the companies that are looking for vulnerabilities. If you don't get explicit approval from the target to look for exploits, you are just a hacker who can (and should) go to jail.