agnos.is Forums

Are there any counter-AI bots in the fediverse?

fediverse · 67 Posts · 17 Posters · 9 Views

  • H [email protected]

    It is like I said. People on platforms like Reddit complain a lot about bots. This platform is kind of supposedto be the better version of that. Hence nit about the same negative dynamics. And I can still tell ChatGPT's uniquie style and a human apart. And once you go into detail, you'll notice the quirks or the intelligence of your conversational partner.

    P This user is from outside of this forum
    P This user is from outside of this forum
    [email protected]
    wrote on last edited by
    #51

    Reddit is different from the fediverse. They work on different principles, and I'd argue the fediverse is very libertarian.

    Is there any way you can rule out survivorship bias? Plus, I'm already doing preliminary work: I'm looking into making responses shorter so there's less information to go on, and I'm trying different models.

    • P [email protected]

      Reddit is different than fediverse. They work on different principles and I argue fediverse is very libertarian.

      Is there anyway you can rule out survivorship bias? Plus I'm already doing preliminary stuff and I looking into making response shorter so that there's less information to go on and trying different models

      H This user is from outside of this forum
      H This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #52

      What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?

      • P [email protected]

        The lllm bot is meant to provide the function of flagging llm bots that operate on the fediverse.

        I'm listening to you people but I'm not getting good reasons as to why I should do it in a certain way or even to not do it.

        Reason does take precedent over request. Y'all are strangers to me and none of you are actually attempting to hear me out as I doing for you.

        rglullis@communick.newsR This user is from outside of this forum
        rglullis@communick.newsR This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #53

    You were implying not just that you wanted to detect bots, but that you wanted to write your own set of bots that would pretend to be humans.

    If your plan is only to write bot detection, that's a whole different thing.

        • H [email protected]

          What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?

          P This user is from outside of this forum
          P This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #54

    So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek R1 7B (I discovered that it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models, since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates and a confidence check. It's oddly meta.
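
    For illustration, here is a rough, self-contained sketch of what a genetic search over prompt templates with a confidence check could look like. The fragments, thresholds, and the score_template() stub below are hypothetical placeholders, not the poster's actual code; in practice the scoring step would call whichever local model is being evaluated.

```python
import random

POPULATION_SIZE = 8
GENERATIONS = 5
MUTATION_RATE = 0.3
CONFIDENCE_THRESHOLD = 0.8  # stop early once a template scores this high

# "Genes" are instruction fragments that get joined into a prompt template.
FRAGMENTS = [
    "Answer in one or two short sentences.",
    "Use casual, informal wording.",
    "Do not ask follow-up questions.",
    "Avoid lists and headings.",
    "Keep punctuation loose and informal.",
    "Stay on the topic of the post you are replying to.",
]

def random_template() -> list[str]:
    """A template is a random subset of fragments, in random order."""
    return random.sample(FRAGMENTS, random.randint(2, 4))

def score_template(template: list[str]) -> float:
    """Placeholder fitness: a real run would generate replies with this template
    and measure how confidently a detector (or human raters) call them human."""
    prompt = " ".join(template)
    brevity_bonus = 1.0 - min(len(prompt) / 400, 1.0)
    return 0.5 * brevity_bonus + 0.5 * random.random()

def crossover(a: list[str], b: list[str]) -> list[str]:
    """Combine the head of one parent with the tail of the other, deduplicated."""
    child = a[: len(a) // 2] + b[len(b) // 2 :]
    return list(dict.fromkeys(child)) or random_template()

def mutate(template: list[str]) -> list[str]:
    """Occasionally splice in one extra fragment."""
    if random.random() < MUTATION_RATE:
        template = template + [random.choice(FRAGMENTS)]
    return list(dict.fromkeys(template))

def evolve() -> tuple[list[str], float]:
    population = [random_template() for _ in range(POPULATION_SIZE)]
    best, best_score = population[0], 0.0
    for _ in range(GENERATIONS):
        scored = sorted(((score_template(t), t) for t in population), reverse=True)
        if scored[0][0] > best_score:
            best_score, best = scored[0]
        if best_score >= CONFIDENCE_THRESHOLD:
            break  # confidence check: good enough, stop searching
        parents = [t for _, t in scored[: POPULATION_SIZE // 2]]
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POPULATION_SIZE)
        ]
    return best, best_score

if __name__ == "__main__":
    template, confidence = evolve()
    print(f"best template (confidence {confidence:.2f}): {' '.join(template)}")
```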

• [email protected] #55, in reply to [email protected] (#53):

    Actually, I asked if there were any counter-AI measures; there probably are, it's just that no one knows what they are. In the post I'm referring to, I explicitly said I'd make a human-like bot, and I'm saying the anti-AI crowd should make a program to flag AI bots.

    Now my project should help people that want to fight AI on the fediverse.

            • P [email protected]

              Actually I asked if there were any counter AI measures there probably is just no one knows what it is . In the post I'm referring too I explicitly said I'd make a human like bot and I'm saying the anti AI crowd should make a program to flag AI bots.

              Now my project should help people that want to fight ai on the fediverse

              rglullis@communick.newsR This user is from outside of this forum
              rglullis@communick.newsR This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #56

    See, so now you are back to saying your plan is to make a shitty thing and put the burden on those against it to come up with countermeasures. That's just lame.

• [email protected] #57, in reply to [email protected] (#56):

    I'm not burdening them. If they don't want to take on the project, they don't take it.

                • P [email protected]

                  I'm not burdening them. If they don't want to take on the they project they don't take it.

                  rglullis@communick.newsR This user is from outside of this forum
                  rglullis@communick.newsR This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #58

    Ok, final message because I'm tired of this:

    • You are openly admitting that you are going to piss in the well by adding a bot that pretends to be a human.
    • You are openly admitting that you are going to do this without providing any form of mitigation.
    • You are going to do this while pushing data to the whole network: no prior testing in a test instance, not even using your own instance for it.
    • You think it is fine to leave the onus of "detecting" the bot on others.

    You are a complete idiot.

• [email protected] #59, in reply to [email protected] (#58):

    I've heard of poisoning the well, but I don't get what well I'm pissing into (no one's articulating anything). Yeah, I admit I don't care much about the counter-AI measures. Idk why you're essentially repeating what I'm saying. I intend to field test it (there's no articulable reason not to). There's no onus, because no one has to make counter-AI measures. It's their choice.

    Apparently I'm an idiot for developing an essentially human-like entity.

                    • P [email protected]

                      So far I've experimented with ollama3.2 (I don't have enough ram for 3.3). Deepseek r1 7b( discovered that it's verbose and asks a lot of questions) and I'll try phi4 later. I could use the chat-gpt models since I have tokens. Ironically I'm thinking about making a genetic algorithm of prompt templates and a confidence check. It's oddly meta

                      H This user is from outside of this forum
                      H This user is from outside of this forum
                      [email protected]
                      wrote on last edited by
                      #60

    I often recommend Mistral-Nemo-Instruct; I think that one strikes a good balance. But be careful with it: it's not censored, so given the right prompt it might yell at people, talk about reproductive organs, etc. All in all, it's a job that takes some effort. You need a good model and a good prompt, and maybe also give it a persona. Then there's the entire framework to feed in the content and decide what to respond to, and, if you want to do it right, an additional framework for safety and monitoring. I think those are the usual things for an AI bot.
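
    The components listed here (model, prompt, persona, a feed-and-decide step, plus safety and monitoring) roughly map onto a loop like the minimal sketch below. Every name, threshold, and the generate() stub is a hypothetical illustration; a real bot would wire generate() to a local model runner and the fetch/post steps to the platform's API.

```python
import logging
import re
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bot-framework")

# Persona: a system-style prompt the model would be conditioned on.
PERSONA = (
    "You are 'Alex', a casual forum user. Reply in one or two short, informal "
    "sentences and stay on topic."
)

# Crude safety filter used on both input and output.
BLOCKLIST = re.compile(r"\b(suicide|self[- ]harm|medical advice)\b", re.IGNORECASE)

@dataclass
class Post:
    author: str
    text: str

def should_reply(post: Post) -> bool:
    """Decision step: skip very short posts, obvious bots, and sensitive topics."""
    if len(post.text) < 40 or post.author.endswith("bot"):
        return False
    return not BLOCKLIST.search(post.text)

def generate(persona: str, post: Post) -> str:
    """Placeholder for the model call (e.g. a local instruct model)."""
    return f"(model reply to {post.author}, conditioned on persona: {persona[:24]}...)"

def safety_check(reply: str) -> bool:
    """Output filter: block flagged topics and overly long replies."""
    return not BLOCKLIST.search(reply) and len(reply) < 500

def run_once(incoming: list[Post]) -> list[str]:
    """One pass over fetched posts; monitoring here is just structured logging."""
    outgoing = []
    for post in incoming:
        if not should_reply(post):
            log.info("skipped post from %s", post.author)
            continue
        reply = generate(PERSONA, post)
        if not safety_check(reply):
            log.warning("safety check rejected reply to %s", post.author)
            continue
        log.info("queued reply to %s", post.author)
        outgoing.append(reply)
    return outgoing

if __name__ == "__main__":
    demo = [Post("alice", "Has anyone compared Lemmy and Mastodon moderation tools in practice?")]
    print(run_once(demo))
```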

                      • P [email protected]

                        I've heard of poisoning the well, but I don't get what well I'm pissing into (no ones articulating anything). Yeah I admit I don't care much on the counter AI measures. Idk why you're essentially repeating what I'm saying. I intend to field test it (there's no articulable reason not to). There's no onus because no one has to make counter AI measures. It's their choice.

                        Apparently I'm an idiot for developing essential human like entity.

                        rglullis@communick.newsR This user is from outside of this forum
                        rglullis@communick.newsR This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #61

    > I don't get what well I'm pissing into

    The well is the social graph itself. You are polluting the conversations by adding content that is neither original nor desirable.

    > I'm an idiot for developing an essentially human-like entity.

    You are an idiot because you are pushing AI slop to people who are asking you not to, while thinking that you are working on something groundbreaking.

• [email protected] #62, in reply to [email protected] (#61):

    No idea what a social graph is meant to be, but people do shitpost and meme even if people don't desire it, or if the meme is a reference.

    Of course, "idiot" means anyone that uses AI. I'm not portraying myself as groundbreaking, even though I did make the fedi-plays genre.

• [email protected] #63, in reply to [email protected], who wrote:

    > People do things for fun sometimes.
    >
    > This is not the same as playing basketball. Unleashing AI bots "just for the fun of it" ends up effectively poisoning the well.

    It sounds like red teaming to me.

• [email protected] #64, in reply to [email protected], who wrote:

    > You want to write software that subverts the expectations of users (who are coming here with the expectation that they will be chatting with other people) and abuses resources provided by others who did not ask you to help with any sort of LLM detection.

    Have you never heard of red teaming?

                              • P [email protected]

                                No idea what that is an that subreddit is dead

                                J This user is from outside of this forum
                                J This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #65

    There are active successors.

                                • J [email protected]

                                  Have you never heard of red teaming?

                                  rglullis@communick.newsR This user is from outside of this forum
                                  rglullis@communick.newsR This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #66

    Red teams are hired by the companies that are looking for vulnerabilities. If you don't get explicit approval from the target to look for exploits, you are just a hacker who can (and should) go to jail.

                                  • P [email protected]

                                    There is and it's propaganda. Even I knew ai has been used in propaganda for months now

                                    corgana@startrek.websiteC This user is from outside of this forum
                                    corgana@startrek.websiteC This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #67

    Absolutely. If you're seeing propaganda, it's because it's allowed on that instance. But the presence of propaganda has nothing to do with whether an account is an LLM or not.
