Instagram Is Full Of Openly Available AI-Generated Child Abuse Content.
-
As a counterpoint, the fact that it is so easy and simple to get those AI images, compared to the risk and extra effort of producing them for real, could make actual child abuse less common and less profitable for mafias and assholes in general. It's a really complex topic that no simple, straightforward answer will solve.
Normalising it would be horrible and should be avoided, but there will always be some amount of people looking for that content. I'd rather have them using AI to create it than going out searching for real content. Prosecuting the AI content is not only very inefficient, it might also be harmful, as the only content left would be the real kind, which makes it much harder to catch those who make it.
I'd rather have them using AI to create it than going out searching for real content.
A rebuttal to this that I've read is that the easy access may encourage people to dig into it and eventually want "the real thing"... but regardless, with it being FOSS, there's no easy way to stop it anyway... It's just a Pandora's box that we can never close.
-
I'd rather have them using AI to create it than going out searching for real content.
A rebuttal to this that I've read is that the easy access may encourage people to dig into it and eventually want "the real thing"... but regardless, with it being FOSS, there's no easy way to stop it anyway... It's just a Pandora's box that we can never close.
And I could rebut that by saying that if someone is interested enough to seek it out with AI, they were likely to try and seek it out anyway without AI; maybe it would take longer and be harder to find... But they'd be the intended audience, now redirected elsewhere.
To quote myself:
It's a really complex topic that no simple, straightforward answer will solve.
We could rebut again and again and again, and get nowhere, because either position is hard to argue: it is simply impossible to produce proper data that proves anything. And worse, defending the use of AI for it can lead to being told you are enabling it in the first place. That's without even mentioning how many people still believe that AI needs real sample images to produce those (whether or not the algorithm was trained on CP is irrelevant on this particular point, as it is not needed for the images to be created).
-
I’m a little confused as to how it can still be AI CSAM if the bodies are voluptuous and the breasts are ample. Childlike faces have been the bread and butter of face filters for years.
Which parts specifically have to be childlike for it to be AI CSAM? This is why we need some laws ASAP.
Things that you want to understand but sure as fuck ain't gonna Google.
-
Parents should get their kids to never touch anything "Meta" made or bought.
But then again, those same parents are currently telling the world what their neighbours are doing, what they're eating, and how cute "insert name here" looked in their new school uniform.
Too bad VR's got a hold, and VRChat's so much worse than the internet chatrooms we grew up with.
-
Yes, at a cursory glance that's true. AI-generated images don't involve the abuse of children; that's great. The problem is the follow-on effects. What's to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming that it's AI generated?
AI image generation is getting absurdly good now, nearly indistinguishable from actual photos. By the end of the year I suspect it will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn't if AI-generated CP is legal?
What's to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming that it's AI generated?
Aside from the other arguments people have presented, this wrecks one of the largest reasons that people produce CSAM. Pedophiles are insular data hoarders by necessity, because actually creating and procuring it is such a big risk. Every time they go online to find new content, they’re at risk of stumbling into a honeypot. And producing it requires IRL work, and a LOT of risk of being caught/turned in by the victim. They tend to form tight-knit rings, and one of the only reliable ways to get into a ring as an outsider is to provide your own CSAM to the others. CSAM is traded in these rings like baseball cards, where you need fresh content in order to receive fresh content.
The data hoarding side of things is where all of the "cops bust pedophile with 100TB of CSAM" headlines come from. In reality, it was probably more like 1TB of videos (which is a lot, but not unheard of) that was backed up multiple times in multiple places, because losing it would be catastrophic for the CSAM producer; they can't simply go grab a new Blu-ray of it. And the cops counted the full size of each backup disk, not just the space that was used.
Intentionally marking your content as AI-generated would ruin the trading value, because nobody will see it as valuable/worth trading for if it’s fake. At best, you won’t get anything for it. At worst, you’d be labeled a cop trying to pass off AI content to gather evidence.
-
It would probably make me distrust the prosecution, like if they're bringing this up they must not have much to go on. Like every time a black man is shot by police they bring up that he smoked weed.
I guess my main complaint is that it's insane to view it as equivalent to real CP, and it's harmful to waste any resources prosecuting it.
That's fair. We can also expect proper moderation from social media sites. I'm okay with a light touch, but it shouldn't be floating around, if you get what I mean.
-
My guess is that the algorithm is really good at predicting who will be likely to follow that kind of content, rather than report it. Basically, it flies under the radar purely because the only people who see it are the ones who have a vested interest in it flying under the radar.
Look again. The explanation is that these images simply don't look like any kind of CSAM. The whole story looks like some sort of scam to me.
-
On the one hand, DNS was being needlessly accusatory, and the logic of 'you don't understand how predators work so you must be one' is silly. On the other hand, I get why they're being so caustic, because YES, CP is ABSOLUTELY used exactly how they describe. The idea is that by getting the child used to sexual activity, they'll get used to thinking about sexual activity and won't be as freaked out by inappropriate propositions, perhaps even believing they're the initiator instead of being manipulated and taken advantage of, and then they won't report the predator to authorities. Not to mention some of the predators who actually feel attraction to children (as opposed to the more than 50% who are just rapists of opportunity) use that manufactured consent to delude themselves into thinking 'well, they're enjoying it and they said yes, so I'm not REALLY doing anything wrong'.
Part 4 of this article, interviewing someone who was trying to research pedos on the dark web, is one example: "In other words, there are child molestation crusaders out there, and Pam ran into a lot of this on the Deep Web. Below is one response to a 7axxn post from a guy, bemoaning his inability to be anything but a "leech" (a person who consumes the content but never submits any) because his family situation made it impossible to actively share child pornography. The other members suggested he could aid "the cause" by helping to "enlighten & educate" the children in his life on the "true philosophies of love"" https://www.cracked.com/personal-experiences-1760-5-things-i-learned-infiltrating-deep-web-child-molesters.html
-
When I saw this, 2 questions came to mind: How come this isn't immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta's platforms do?
The answer is:
All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.
The 1 question that came to mind upon reading this is: What?
AI-generated content is like plastic pollution.
-
The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it more likely that people will be unable to tell whether content is real or AI. That allows actual children to get hurt when material is not reported, or is skimmed over, because someone thought it was AI.
-