25 arrested in global hit against AI-generated child sexual abuse material
-
Considering style transfer models, you could probably just draw or 3D model unknown details and feed it that.
-
I totally agree with these guys being arrested. I want to get that out of the way first.
But what crime did they commit? They didn't abuse children... the children are AI-generated and do not exist. What they did is obviously disgusting and makes me want to punch them in the face repeatedly until it's flat, but where's the line here? If they draw pictures of non-existent children, is that also a crime?
Does that open artists up to interpretation of the law when it comes to art? Can they be put in prison because they did a professional painting of a child? Like what if they did a painting of their own child in the bath or something? Sure, the content's questionable, but it's not exactly predatory. And if you add safeguards for those people, couldn't the predators then just claim artistic expression?
It just seems entirely unenforceable and an entire goddamn can of worms...
-
Again, that's not how image generators work.
You can't just make up some wishful thinking and assume that's how it must work.
It takes thousands upon thousands of unique photos to make an image generator.
Are you going to draw enough child genitalia to train these generators? Are you actually comfortable doing that task?
-
Exactly, which is why I'm against your first line: I don't want them arrested, specifically because of artistic expression. I think they're absolutely disgusting and should stop, but they're not harming anyone, so they shouldn't go to jail.
In my opinion, you should only go to jail if there's an actual victim. Who exactly is the victim here?
-
"only way"
That's just not true.
That said, there's a decent chance that existing models use real images, and that is what we should be fighting against. The user of a model has plausible deniability because there's a good chance they don't understand how they work, but the creators of the model should absolutely know where they're getting the source data from.
Prove that the models use illegal material and go after the model creators for that, because that's an actual crime. Don't go after people using the models who are providing alternatives to abusive material.
-
Exactly. If there's no victim, there's no crime.
-
Exactly. Any time there's subjectivity, it's ripe for abuse.
The law should punish:
- creating images of actual underage people
- creating images of actual non-consenting people of legal age
- knowingly distributing one of the above
Each of those has a clearly identifiable victim. Creating a new work of a fictitious person doesn't have any clearly identifiable victim.
Don't make laws to make prosecution easier, make laws to protect actual people from becoming victims or at least punish those who victimize others.
-
I could live with this kind of thing being classified as a misdemeanor provided the creator didn’t use underage subjects to train or influence the output.
So could I, but that doesn't make it just. It should only be a crime if someone is actually harmed, or intended to be harmed.
Creating a work about a fictitious individual shouldn't be illegal, regardless of how distasteful the work is.
-
I think all are unethical, and yes, any service offering this should be shut down.
I never said prosecute the users.
I said you can't make it ethically, because at some point someone is using/creating original art, and the odds of human exploitation at some point in the chain are just too high.
-
It obviously depends on where they live and/or committed the crimes. But most countries have broad laws against anything, real or fake, that depicts CSAM.
It's partly because, as technology gets better, it would be easy for offenders to claim anything they've been caught with is AI-created.
It's also because there's a belief that AI-generated CSAM encourages real child abuse.
I shan't say whether it does - I tend to believe so, but I haven't seen data to prove me right or wrong.
Also, in the end, I think it's simply an ethical position.
-
"the odds of human exploitation at some point in the chain are just too high"
We don't punish people based on odds. At least in the US, the standard is that they're guilty "beyond a reasonable doubt." As in, there's virtually no possibility that they didn't commit the crime. If there's a 90% chance someone is guilty, but a 10% chance they're completely innocent, most would agree that there's reasonable doubt, so they shouldn't be convicted.
If you can't prove that they made it unethically, and there are methods to make it ethically, then you have reasonable doubt. All the defense needs to do is demonstrate one such method of producing it ethically, and that creates reasonable doubt.
Services should only be shut down if they're doing something illegal. Prove that the images are generated using CSAM as source material, and then shut down any service that refuses to remove it, or that can be proven "beyond a reasonable doubt" to have knowingly committed a crime. That's how the law works: you only punish people you can prove "beyond a reasonable doubt" were committing a crime.
-
I'm not, no. But I'm also well-enough versed in Stable Diffusion and LoRAs to know that even a model with no training on a particular topic can be made to produce it with enough tweaking, and if the results are bad, you can plug in an extra model trained on at minimum 10-50 images to significantly improve them.
-
I'm afraid Europol is shooting themselves in the foot here.
What should be done is better ways to mark and identify AI-generated content, not a carpet ban and criminalization.
Let whoever happens to crave CSAM (remember: sexuality, however perverted or terrible it is, is not a choice) use the most harmless outlet - otherwise, they may just turn to the real material, and as ongoing investigations suggest, there's no shortage of supply or demand on that front. If everything is illegal, and some of it is needed anyway, it's easier to escalate, and that's dangerous.
As sickening as it may sound to us, these people often need something, or else things are quickly gonna go downhill. Give them their drawings.
-
I actually do not agree with them being arrested.
While I recognize the issue of identification posed in the article, I hold a strong opinion it should be tackled in another way.
AI-generated CSAM might be a powerful tool to reduce demand for the content featuring real children. If we leave it legal to watch and produce, and keep the actual materials illegal, we can make more pedophiles turn to what is less harmful and impactful - a computer-generated image that was produced with no children being harmed.
By introducing actions against AI-generated materials, they make such materials as illegal as the real thing, and there's one less reason for an interested party not to go to a CSAM site and watch actual children getting abused, perpetuating the cycle and leading to more real-world victims.
-
That's exactly how they work. According to many articles I've seen in the past, one of the most common models used for this purpose is Stable Diffusion. For all we know, this model was never fed any CSAM material, but it seems to be good enough for people to get off - which is exactly what matters.
-
Okay, but my point still stands.
Someone has to make the genital imagery for the models to learn from. Some human has to be involved; otherwise it just wouldn't exist.
And if you're not willing to get your hands dirty and do it, why would anyone else?
-
How can it be made ethically?
That's my point.
It can't.
Some human has to sit and make many, many, many models of genitals to produce an artificial one.
And that, IMO, is not ethically possible.
-
How can it be trained to produce something without human input?
To verify its outputs are indeed correct, some human has to sit and view them.
Will that be you?
-
You can download the models and run them yourself; banning them will be about as effective as the US government was at banning encryption.
-
Same with misinformation, where anything they disagree with, in good faith or not, gets labeled misinformation.