Instagram Is Full Of Openly Available AI-Generated Child Abuse Content.
-
Nucleo's investigation identified accounts with thousands of followers engaging in illegal behavior that Meta's security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts.
-
Some day, letting kids use social media will be considered child abuse...
-
Parents should get their kids to never touch anything “Meta” made or bought.
But then again, those same parents are currently telling the world what their neighbours are doing, what they’re eating, and how cute “insert name here” looked in their new school uniform.
-
They are also providing Meta with free age-progression training material when they upload pictures of their kids each year on the first day of school.
-
... Meta's security systems were unable to identify...
I think you mean incentivized to ignore.
-
IG is a total fascist shithole. I closed my "political" acct because all of the sponsored content was fascist trash: zionism, flat earthism, qanon, racist stuff, anti-vax, etc.
Switched to Pixelfed and RSS... and Lemmy ofc.
-
after contact, the company acknowledged the problem and removed the accounts
Meta is outsourcing content moderation to journalists.
-
Child Sexual Abuse Material is abhorrent because children were literally abused to create it.
AI generated content, though disgusting, is not even remotely on the same level.
The moral panic around AI that leads to implying that these things are the same thing is absurd.
Go after the people filming themselves literally gang raping toddlers, not the people typing forbidden words into an image generator.
Don't dilute the horror of the production of CSAM by equating it to fake pictures.
-
Yes, at a cursory glance that's true. AI-generated images don't involve the abuse of children, which is great. The problem is the follow-on effects. What's to stop actual child abusers from photoshopping a sixth finger onto their images and then claiming they're AI generated?
AI image generation is getting absurdly good now, nearly indistinguishable from real photographs. By the end of the year I suspect it will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn't if AI-generated CP is legal?
-
What's the follow-on effect of making generated images illegal?
Do you want your freedom to be at stake when the question before the jury is "How old is this image of a person (who doesn't exist)?" or "Is this fake person TOO child-like?"
When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn't if AI-generated CP is legal?
You won't be able to tell, we can assume that this is a given.
So the real question is:
Who are you trying to arrest and put in jail and how are you going to write that difference into law so that innocent people are not harmed by the justice system?
To me, the evil people are the ones harming actual children. Trying to blur the line between them and people who generate images is a morally confused position.
There's a clear distinction between the two groups and that distinction is that one group is harming people.
-
Although that's true, such material can easily be used to groom children, which is where I think the real danger lies.
I really wish they had excluded children from the datasets.
You can't really put a stop to it anymore, but I don't think it should be normalized and accepted just because there is no longer a direct victim.
-
Meta doesn’t care about AI generated content. There are thousands of fake accounts with varying quality of AI generated content and reporting them does exactly shit.
-
Kids will do things if they see other children doing them in pictures and videos. It's easier to normalize sexual behavior with CP than without.
-
What if it features real kids' faces?