agnos.is Forums

I couldn't agree more.

Technology
9 Posts 8 Posters 0 Views
This topic has been deleted. Only users with topic management privileges can see it.
  • [email protected] wrote (#1):

    I couldn't agree more. Human moderators, especially unpaid ones, simply aren't the way to go, and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent, but it's an extremely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I am and am not interested in seeing, and have it analyze the content to decide what needs to be filtered out.

    Take this thread for example:

    Cool. I think he should piss on the 3rd rail.

    This pukebag is just as bad as Steve. Fuck both of them.

    What a cunt.

    How else is anyone going to filter out hateful, zero-value content like this without an intelligent moderation system? People come up with new insults faster than I can add them to the filter list. AI could easily filter out 95% of toxic content like this.
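The difference between the static filter list the post complains about and the AI-style filtering it asks for can be sketched in a few lines. This is a toy illustration only, not any real Lemmy or moderation API; `BLOCK_LIST`, `score_fn`, and `fake_score` are all hypothetical stand-ins (a real setup would call a trained toxicity classifier).

```python
# Hypothetical sketch: a static block list vs. a pluggable scoring filter.

BLOCK_LIST = {"pukebag", "cunt"}

def keyword_filter(comment: str) -> bool:
    """Blunt approach: hide a comment if it contains any listed word."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCK_LIST)

def score_filter(comment: str, score_fn, threshold: float = 0.8) -> bool:
    """Hide a comment when a model-supplied toxicity score crosses a threshold."""
    return score_fn(comment) >= threshold

# Stand-in "model" for the sketch; in practice score_fn would be a classifier.
def fake_score(text: str) -> float:
    return 0.9 if keyword_filter(text) else 0.1

print(keyword_filter("What a cunt."))        # True: listed insult caught
print(keyword_filter("What a total jerk."))  # False: a new insult slips through
```

The point of the sketch is the second `print`: the keyword list misses anything not yet on it, which is exactly the "new insults faster than I can add them" problem, whereas a score-based filter only needs the model to generalize.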

    • O [email protected]

      I couldn't agree more. Human moderators, especially unpaid ones simply aren't the way to go and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent but is extemely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I'm interested in seeing and what not and have it analyze the content to see what needs to be filtered out.

      Take this thread for example:

      Cool. I think he should piss on the 3rd rail.

      This pukebag is just as bad as Steve. Fuck both of them.

      What a cunt.

      How else is anyone going to filter out hateful content like this with zero value without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.

      M This user is from outside of this forum
      M This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #2

      Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous images before they even get uploaded.

      Context: last year there was a big spam attack of CSAM and gore across multiple instances. Some had to shut down temporarily because they couldn't keep up with moderation. I don't remember the name of the tool, but some people built a program that uses AI to recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during the attacks.

      Early AI moderation systems like this are something more platforms should use. Human moderators, even paid ones, shouldn't have to go through large amounts of violent content every day. Moderators at Facebook have been making these points for a while now; many of them have developed mental health issues through their work and don't get any medical support. So whatever you think of AI and its morality, this is one of the few good applications in my opinion.
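The pre-upload gate described above can be sketched as a tiny pipeline: exact-hash matching against previously flagged images, backed by a model score for anything new. This is a hypothetical sketch, not the actual tool the post mentions (whose name the poster doesn't remember); `known_bad`, `fake_model`, and the thresholds are all invented for illustration, and real systems use perceptual hashes rather than SHA-256.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest used as an exact-match fingerprint for an upload."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of previously flagged images.
known_bad = {sha256(b"previously-flagged-image-bytes")}

def image_gate(data: bytes, score_fn, threshold: float = 0.9) -> str:
    """Decide what happens to an upload before it reaches the site."""
    if sha256(data) in known_bad:
        return "reject"           # exact match against the known-bad set
    if score_fn(data) >= threshold:
        return "hold-for-review"  # model flags it; a human confirms
    return "accept"

# Stand-in "model" for the sketch.
def fake_model(data: bytes) -> float:
    return 0.95 if b"gore" in data else 0.05

print(image_gate(b"previously-flagged-image-bytes", fake_model))  # reject
print(image_gate(b"cat picture", fake_model))                     # accept
```

The "hold-for-review" middle path is what reduces the moderation load the post describes: humans only see the model's uncertain or positive cases instead of every upload.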

      • M [email protected]

        Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.

        Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn't keep up with moderation. I don't remember the name of the tool, but some people made a program that uses AI to try and recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during these attacks.

        Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn't need to go though large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don't get any medical support. So no matter what you think of AI and if it's moral, this is actually one of the few good applications in my opinion

        M This user is from outside of this forum
        M This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #3

        Moderators at Facebook have been making these points for a while now; many of them have developed mental health issues through their work and don't get any medical support

        How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.

        • O [email protected]

          I couldn't agree more. Human moderators, especially unpaid ones simply aren't the way to go and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent but is extemely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I'm interested in seeing and what not and have it analyze the content to see what needs to be filtered out.

          Take this thread for example:

          Cool. I think he should piss on the 3rd rail.

          This pukebag is just as bad as Steve. Fuck both of them.

          What a cunt.

          How else is anyone going to filter out hateful content like this with zero value without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.

          tabular@lemmy.worldT This user is from outside of this forum
          tabular@lemmy.worldT This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #4
          This post did not contain any content.
          • O [email protected]

            I couldn't agree more. Human moderators, especially unpaid ones simply aren't the way to go and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent but is extemely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I'm interested in seeing and what not and have it analyze the content to see what needs to be filtered out.

            Take this thread for example:

            Cool. I think he should piss on the 3rd rail.

            This pukebag is just as bad as Steve. Fuck both of them.

            What a cunt.

            How else is anyone going to filter out hateful content like this with zero value without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.

            W This user is from outside of this forum
            W This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #5

    Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?

            • M [email protected]

              Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don’t get any medical support

              How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.

              M This user is from outside of this forum
              M This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #6

    I agree, but it's also not surprising. I think somebody else posted the article about Kenyan Facebook moderators somewhere in this comment section, if you want to know more.

              • O [email protected]

                I couldn't agree more. Human moderators, especially unpaid ones simply aren't the way to go and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent but is extemely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I'm interested in seeing and what not and have it analyze the content to see what needs to be filtered out.

                Take this thread for example:

                Cool. I think he should piss on the 3rd rail.

                This pukebag is just as bad as Steve. Fuck both of them.

                What a cunt.

                How else is anyone going to filter out hateful content like this with zero value without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.

                V This user is from outside of this forum
                V This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #7

                Translation: An AI would allow me to maybe have an echo chamber since human moderators won't work for me for free.

                • M [email protected]

                  Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don’t get any medical support

                  How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.

                  B This user is from outside of this forum
                  B This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #8

    The real answer? They use people in countries like Nigeria that have fewer laws.

                  • M [email protected]

                    Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.

                    Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn't keep up with moderation. I don't remember the name of the tool, but some people made a program that uses AI to try and recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during these attacks.

                    Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn't need to go though large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now, many of which have gotten mental issues though their work and don't get any medical support. So no matter what you think of AI and if it's moral, this is actually one of the few good applications in my opinion

                    S This user is from outside of this forum
                    S This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #9

    Old-school AI like automod, or LLM/genAI mod and image-recognition tools?

    I'd need to see some kind of proof that Lemmy instances are using LLM mod tools; I'd be very interested.
