Are there any counter-AI bots in the Fediverse?
-
I'm not burdening them. If they don't want to take on the project, they don't have to take it.
Ok, final message because I'm tired of this:
- you are openly admitting that you are going to piss on the well by adding a bot that pretends to be a human.
- you are openly admitting that you are going to do this without providing any form of mitigation.
- you are going to do this while pushing data to the whole network. No prior testing in a test instance, not even using your own instance for it.
- you think it is fine to leave the onus of "detecting" the bot on others.
You are a complete idiot.
-
I've heard of poisoning the well, but I don't get what well I'm pissing into (no one's articulating anything). Yeah, I admit I don't care much about counter-AI measures. Idk why you're essentially repeating what I'm saying. I intend to field test it (there's no articulable reason not to). There's no onus, because no one has to make counter-AI measures. It's their choice.
Apparently I'm an idiot for developing an essentially human-like entity.
-
So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered that it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates with a confidence check. It's oddly meta.
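For what it's worth, a minimal sketch of that genetic-algorithm-over-prompt-templates idea might look like this. Every name here (TEMPLATES, score_template, mutate) is a hypothetical illustration, and the fitness stub is where a real model call and the confidence check would go:

```python
import random

# Hypothetical starting population of prompt templates.
TEMPLATES = [
    "Reply casually to: {post}",
    "You are a friendly commenter. Respond to: {post}",
    "Answer briefly and informally: {post}",
]

def score_template(template: str) -> float:
    """Fitness stub: a real version would render the template, query a
    local model, and score the reply with a confidence check. Random
    here so the sketch runs standalone."""
    return random.random()

def mutate(template: str) -> str:
    """Toy mutation operator: append a style hint. A real one would
    rewrite instructions, tone words, or few-shot examples."""
    hints = [
        " Keep it under two sentences.",
        " Avoid sounding robotic.",
        " Ask no questions.",
    ]
    return template + random.choice(hints)

def evolve(pop: list[str], generations: int = 5, keep: int = 2) -> list[str]:
    """Rank templates by fitness, keep the best, refill with mutants."""
    for _ in range(generations):
        ranked = sorted(pop, key=score_template, reverse=True)
        survivors = ranked[:keep]
        pop = survivors + [
            mutate(random.choice(survivors)) for _ in range(len(pop) - keep)
        ]
    return pop

if __name__ == "__main__":
    print(evolve(TEMPLATES))
```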
I often recommend Mistral-Nemo-Instruct; I think that one strikes a good balance. But be careful with it: it's not censored, so given the right prompt it might yell at people, talk about reproductive organs, etc. All in all, it's a job that takes some effort. You need a good model and a good prompt, and maybe also a persona. Then you need the entire framework to feed in the content and make decisions about what to respond to. And if you want to do it right, an additional framework for safety and monitoring. I think those are the usual things for an AI bot.
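A rough skeleton of that framework could look like the following, assuming the `ollama` Python client and a locally pulled `mistral-nemo` model. The persona, blocklist, and function names are illustrative assumptions, and the safety gate is deliberately crude:

```python
import ollama  # assumes `pip install ollama` and a running Ollama server

PERSONA = "You are a terse, polite commenter. Never claim to be human."
BLOCKLIST = ("medical", "legal")  # crude safety gate, illustration only

def should_respond(post: str) -> bool:
    """Decision step: skip empty posts and anything on the blocklist."""
    text = post.lower()
    return bool(text.strip()) and not any(word in text for word in BLOCKLIST)

def generate_reply(post: str) -> str | None:
    """Feed a post through persona + model, with a logging hook."""
    if not should_respond(post):
        return None
    response = ollama.chat(
        model="mistral-nemo",  # assumed model tag; swap for whatever you pulled
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": post},
        ],
    )
    reply = response["message"]["content"]
    # Monitoring hook: log every generated reply for later review.
    print(f"[bot] replying: {reply!r}")
    return reply

if __name__ == "__main__":
    print(generate_reply("What's a good beginner Linux distro?"))
```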
-
I don’t get what well I’m pissing into
The well is the social graph itself. You are polluting the conversations by adding content that is neither original nor desirable.
I'm an idiot for developing an essentially human-like entity.
You are an idiot because you are pushing AI slop to people who are asking you not to, while thinking that you're working on something groundbreaking.
-
No idea what a social graph is meant to be, but people do shitpost and meme even if people don't desire it, or if the meme is a reference.
Of course "idiot" means anyone that uses AI. I'm not portraying myself as groundbreaking, even though I did make the fedi-plays genre.
-
People do things for fun sometimes.
This is not the same as playing basketball. Unleashing AI bots "just for the fun of it" ends up effectively poisoning the well.
It sounds like red teaming to me.
-
You want to write software that subverts the expectations of users (who come here expecting to chat with other people) and abuses resources provided by others, who did not ask to help you with any sort of LLM detection.
Have you never heard of red teaming?
-
No idea what that is, and that subreddit is dead.
There are active successors.
-
Have you never heard of red teaming?
Red teams are hired by the companies that are looking for vulnerabilities. If you don't get explicit approval from the target to look for exploits, you are just a hacker who can (and should) go to jail.
-
There is, and it's propaganda. Even I knew AI has been used in propaganda for months now.
Absolutely: if you're seeing propaganda, it's because it's allowed on that instance. But the presence of propaganda has nothing to do with whether an account is an LLM or not.
-