Instagram Is Full Of Openly Available AI-Generated Child Abuse Content.
-
Found the guy who watches said content. I hope you never plan on having kids.
-
Kids will do things if they see other children doing it in pictures and videos. It's easier to normalize sexual behavior with CP than without.
-
This sounds like you're searching really hard for a reason to justify banning it. Pretty tenuous "what if" there.
Like, a dildo could hypothetically be used to sexualize a child. Should we ban dildos?
It's so vague it could apply to anything.
Banning the tech, banning generated CP on the internet, or banning it at home?
I'm a big advocate of AI and don't personally want any kind of banning or censorship of the tools.
I don't think it should be published on any kind of image-sharing site. I don't hold people who publish it in high regard, and I'm not against some kind of consequence. I generally view prison as unproductive, though.
At home, I'm not sure. People can do what they want behind closed doors, imo. I don't want any kind of surveillance, but I don't know how I'd react if it got brought up at trial as a kind of evidence, if the allegations have something to do with that theme (child molestation).
-
> Banning the tech, banning generated CP on the internet, or banning it at home?
> I'm a big advocate of AI and don't personally want any kind of banning or censorship of the tools.
> I don't think it should be published on any kind of image-sharing site. I don't hold people who publish it in high regard, and I'm not against some kind of consequence. I generally view prison as unproductive, though.
> At home, I'm not sure. People can do what they want behind closed doors, imo. I don't want any kind of surveillance, but I don't know how I'd react if it got brought up at trial as a kind of evidence, if the allegations have something to do with that theme (child molestation).
It would probably make me distrust the prosecution: if they're bringing this up, they must not have much to go on. Like how every time a black man is shot by police, they bring up that he smoked weed.
I guess my main complaint is that it's insane to view it as equivalent to real CP, and it's harmful to waste any resources prosecuting it.
-
With a set of all images on the internet. Why do you people always think this is a "gotcha"?
I've been assuming it's because they truly have no idea how this tech works
-
Lmao, you said it makes no sense in regard to the material being used to groom children. I don't need to hold your hand through the thought process of why it can potentially groom or foster that behavior. Go eat curb.
-
> Nucleo's investigation identified accounts with thousands of followers engaging in illegal behavior that Meta's security systems were unable to identify; after contact, the company acknowledged the problem and removed the accounts
When I saw this, 2 questions came to mind: How come this isn't immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta's platforms do?
The answer is:
> All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.
The 1 question that came to mind upon reading this is: What?
-
The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it more likely that people will be unable to tell whether material is real or AI. That lets actual children get hurt when real abuse is not reported, or is skimmed over, because someone thought it was AI.
-
> I've been assuming it's because they truly have no idea how this tech works
Hey.
I've been in tech for 20 years. I know Python, Java, C#. I've worked with TensorFlow and language models. I understand this stuff.
You absolutely could train an AI on safe material to do what you're saying.
Stable Diffusion and OpenAI have not guaranteed that they trained their AI on safe materials.
It's like going to buy a burger, and the restaurant says "We can't guarantee there's no human meat in here". At best it's lazy. At worst it's abusive.
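To make "train on safe material" concrete: roughly, you run every candidate image through a safety classifier before it ever reaches the training set. A minimal sketch, assuming a hypothetical off-the-shelf classifier and a made-up confidence threshold:

```python
# Sketch: filter a raw image corpus with a safety classifier before training.
# The model id and the 0.98 threshold are illustrative assumptions, not
# anything Stability AI or OpenAI have documented about their pipelines.
from pathlib import Path

from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

def build_safe_corpus(raw_dir: str, safe_dir: str, threshold: float = 0.98) -> None:
    """Copy only images the classifier confidently labels as safe."""
    out = Path(safe_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(raw_dir).glob("*.jpg"):
        image = Image.open(path).convert("RGB")
        scores = {r["label"]: r["score"] for r in classifier(image)}
        # "normal" is this classifier's safe label; anything the model is
        # not confident about gets dropped rather than kept.
        if scores.get("normal", 0.0) >= threshold:
            image.save(out / path.name)

build_safe_corpus("raw_scrape", "training_data")
```

A real pipeline would stack several independent filters plus human auditing, but that's the principle, and it's entirely doable.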
-
> Hey.
> I've been in tech for 20 years. I know Python, Java, C#. I've worked with TensorFlow and language models. I understand this stuff.
> You absolutely could train an AI on safe material to do what you're saying.
> Stable Diffusion and OpenAI have not guaranteed that they trained their AI on safe materials.
> It's like going to buy a burger, and the restaurant says "We can't guarantee there's no human meat in here". At best it's lazy. At worst it's abusive.
I mean, there is no photograph of a school bus with Pegasus wings diving to the Titanic, but I bet one of these AIs can crank out that picture. If it can do that...?
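And that's easy to check yourself. A minimal sketch with the diffusers library; the checkpoint id here is just the commonly mirrored public SD 1.5 weights, an assumption rather than a claim about what any particular account used:

```python
# Sketch: diffusion models can compose concepts never photographed together.
import torch
from diffusers import StableDiffusionPipeline

# Public SD 1.5 checkpoint id; swap in whatever checkpoint you have locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a school bus with pegasus wings diving toward the wreck of the Titanic"
image = pipe(prompt, num_inference_steps=30).images[0]  # no such photo exists
image.save("bus_pegasus_titanic.png")
```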
-
> after contact, the company acknowledged the problem and removed the accounts
Meta is outsourcing content moderation to journalists.
Meta profits from these accounts; it also profits from scam and fraud posts, because they pay for ad space. It has literally no incentive to moderate beyond the bare minimum its automated tools already do.
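For context, that "bare minimum" automated tooling is mostly perceptual-hash matching against databases of already-known material (PhotoDNA-style). A toy sketch with the imagehash library, using a placeholder hash value, shows why a newly generated image matches nothing:

```python
# Toy sketch of hash-based scanning, the PhotoDNA-style "bare minimum":
# uploads are compared against perceptual hashes of already-known material.
import imagehash
from PIL import Image

# Hashes of previously identified images (placeholder value, not a real hash).
known_hashes = {imagehash.hex_to_hash("f0e4c2a186b35d97")}

def is_known_match(upload_path: str, max_distance: int = 8) -> bool:
    """Flag an upload only if it is a near-duplicate of known material."""
    h = imagehash.phash(Image.open(upload_path))
    # imagehash overloads '-' to give the Hamming distance between hashes.
    return any(h - known <= max_distance for known in known_hashes)

# A freshly AI-generated image has no counterpart in the database,
# so it sails straight through this kind of check.
```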
-
> The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it more likely that people will be unable to tell whether material is real or AI. That lets actual children get hurt when real abuse is not reported, or is skimmed over, because someone thought it was AI.
As a counterpoint, the fact that it is so easy and simple to get those AI images, compared to the risk and extra effort of doing it for real, could make actual child abuse less common and less profitable for mafias and assholes in general. It's a really complex topic that no simple, straight answer would solve.
Normalizing it would be horrible and should be avoided, but there will always be some number of people looking for that content. I'd rather have them using AI to create it than going out searching for real content. Prosecuting the AI content is not only very inefficient, it might also be harmful: the only content left would be the real kind, whose makers are much harder to catch.
-
Shh! We're trying to ragebait here! Be outraged!
-
> Nucleo's investigation identified accounts with thousands of followers engaging in illegal behavior that Meta's security systems were unable to identify; after contact, the company acknowledged the problem and removed the accounts
Stuff like this is a good ad for Pixelfed.
-
On the one hand, DNS was being needlessly accusatory, and the logic of 'you don't understand how predators work, so you must be one' is silly. On the other hand, I get why they're being so caustic, because YES, CP is ABSOLUTELY used exactly how they describe. The idea is that by getting the child used to sexual activity, they'll get used to thinking about sexual activity and won't be as freaked out by inappropriate propositions, perhaps even believing they're the initiator rather than someone being manipulated and taken advantage of, and then they won't report the predator to the authorities. Not to mention, some of the predators who actually feel attraction to children (as opposed to the more than 50% who are just rapists of opportunity) use that manufactured consent to delude themselves into thinking 'well, they're enjoying it and they said yes, so I'm not REALLY doing anything wrong'.
Part 4 of this article, interviewing someone who was trying to research pedos on the dark web, gives one example: "In other words, there are child molestation crusaders out there, and Pam ran into a lot of this on the Deep Web. Below is one response to a 7axxn post from a guy, bemoaning his inability to be anything but a "leech" (a person who consumes the content but never submits any) because his family situation made it impossible to actively share child pornography. The other members suggested he could aid "the cause" by helping to "enlighten & educate" the children in his life on the "true philosophies of love"" https://www.cracked.com/personal-experiences-1760-5-things-i-learned-infiltrating-deep-web-child-molesters.html
-
> When I saw this, 2 questions came to mind: How come this isn't immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta's platforms do?
> The answer is:
> > All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.
> The 1 question that came to mind upon reading this is: What?
My guess is that the algorithm is really good at predicting who will be likely to follow that kind of content, rather than report it. Basically, it flies under the radar purely because the only people who see it are the ones who have a vested interest in it flying under the radar.
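A toy sketch of that guess; every number and field name below is invented purely to illustrate the incentive:

```python
# Toy model of the guess above: an engagement-ranked feed only surfaces a
# post to users whose predicted follow-probability outweighs their predicted
# report-probability. All values here are made up for illustration.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    p_follow: float  # predicted chance of following/engaging
    p_report: float  # predicted chance of reporting

def should_surface(user: User, report_cost: float = 25.0) -> bool:
    """Surface the post only when expected engagement beats expected report risk."""
    return user.p_follow > user.p_report * report_cost

audience = [User("likely_fan", 0.40, 0.001), User("average_user", 0.02, 0.05)]
visible_to = [u.name for u in audience if should_surface(u)]
print(visible_to)  # ['likely_fan'] -- the only viewers are the vested ones
```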
-
> Hey.
> I've been in tech for 20 years. I know Python, Java, C#. I've worked with TensorFlow and language models. I understand this stuff.
> You absolutely could train an AI on safe material to do what you're saying.
> Stable Diffusion and OpenAI have not guaranteed that they trained their AI on safe materials.
> It's like going to buy a burger, and the restaurant says "We can't guarantee there's no human meat in here". At best it's lazy. At worst it's abusive.
Ok, but by that definition Google should be banned, because their crawler isn't guaranteed not to pick up CP.
In my opinion, if the technology involves casting a huge net and then creating an abstracted product from what is caught in the net, with no step in between seen by a human, then is it really causing any sort of actual harm?
-
> When I saw this, 2 questions came to mind: How come this isn't immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta's platforms do?
> The answer is:
> > All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.
> The 1 question that came to mind upon reading this is: What?
I’m a little confused as to how it can still be AI CSAM if the bodies are voluptuous and the breasts are ample. Childlike faces have been the bread and butter of face filters for years.
Which parts specifically have to be childlike for it to be AI CSAM? This is why we need some laws ASAP.