GenAI website goes dark after explicit fakes exposed
-
This post did not contain any content.
AI CP. They found AI-generated CP that had been made on their service...
"Explicit fakes" makes it sound less bad.
They were allowing AI CP to be made.
-
AI CP. They found AI-generated CP that had been made on their service...
"Explicit fakes" makes it sound less bad.
They were allowing AI CP to be made.
This is the type of shit that radicalizes me against generative AI. It's done so much more harm than good.
-
This is the type of shit that radicalizes me against generative AI. It's done so much more harm than good.
The craziest thing to me is that there were movements advocating for the creation of CP through AI to help those addicted to it, since it "wasn't real" and no victims were involved. But no comment on how the model was trained to generate those images, or on the damage that will come when such things get normalized.
It just should never be normalized or allowed to exist.
-
The craziest thing to me is that there were movements advocating for the creation of CP through AI to help those addicted to it, since it "wasn't real" and no victims were involved. But no comment on how the model was trained to generate those images, or on the damage that will come when such things get normalized.
It just should never be normalized or allowed to exist.
Anything like that involving children or child-like individuals is a hard fucking no from me. It's like those mfs who have art of a little anime girl and go "actually she's a 5000-year-old vampire." They know exactly what the fuck they're doing. I also hate the "it's not real" argument; like, mf, the sentiment is still there.
-
The craziest thing to me is that there were movements advocating for the creation of CP through AI to help those addicted to it, since it "wasn't real" and no victims were involved. But no comment on how the model was trained to generate those images, or on the damage that will come when such things get normalized.
It just should never be normalized or allowed to exist.
Probably got all the data to train it from the Pentagon. They're known for having tons of it, and a lot of their staff (more than 25%) are used to seeing it frequently.
Easily searchable, though I don't like searching for that shit. If you literally add "pentagon" to c____ p___ in a search, a million articles on DIFFERENT subjects (other than this House bill) come up. Here's one post: https://thehill.com/policy/cybersecurity/451383-house-bill-aims-to-stop-use-of-pentagon-networks-for-sharing-child/
-
AI CP. They found AI-generated CP that had been made on their service...
"Explicit fakes" makes it sound less bad.
They were allowing AI CP to be made.
Is “CP” so you don’t get flagged, or is it for sensitivity?
-
Is “CP” so you don’t get flagged, or is it for sensitivity?
I don't like saying the full phrase; it's a disgusting merger of words that shouldn't exist.
-
The craziest thing to me is that there were movements advocating for the creation of CP through AI to help those addicted to it, since it "wasn't real" and no victims were involved. But no comment on how the model was trained to generate those images, or on the damage that will come when such things get normalized.
It just should never be normalized or allowed to exist.
Just for what it's worth, you don't need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.
The fact that you don't need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It's gross, but it's also hard to argue with. We allow all types of illegal subjects to be presented in porn: incest, rape, murder, etc. While most mainstream sites won't allow those types of material, none of them are technically outlawed - partly because of freedom of speech and artistic expression and yadda yadda, but also partly because it all comes with the understanding that it's a fake, made-for-film production and that nobody involved had their consent violated. It's okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without actually violating the consent of any real people, then what makes it different?
I don't know how I feel about it, myself. The idea of "ethically sourced" CSAM doesn't exactly sit right with me, but if it's possible to make it in a truly victimless manner, then I find it hard to argue for outright banning something just because I don't like it.
-
This post did not contain any content.
Who actually gets hurt in AI-generated CP? The servers?
-
Who actually gets hurt in AI-generated CP? The servers?
I'm no pedo, but what you do in your own home that hurts nobody is your own thing.
-
I'm no pedo, but what you do in your own home that hurts nobody is your own thing.
Yes, but how is the AI making the images or videos? It has to be trained on SOMETHING.
So, direct harm or not, harm is done at some point in the process, and it needs to be stopped before it slips and gets worse because people "get used to" it.
-
Just for what it's worth, you don't need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.
The fact that you don't need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It's gross, but it's also hard to argue with. We allow all types of illegal subjects to be presented in porn: incest, rape, murder, etc. While most mainstream sites won't allow those types of material, none of them are technically outlawed - partly because of freedom of speech and artistic expression and yadda yadda, but also partly because it all comes with the understanding that it's a fake, made-for-film production and that nobody involved had their consent violated. It's okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without actually violating the consent of any real people, then what makes it different?
I don't know how I feel about it, myself. The idea of "ethically sourced" CSAM doesn't exactly sit right with me, but if it's possible to make it in a truly victimless manner, then I find it hard to argue for outright banning something just because I don't like it.
There is also the angle that generated CSAM looking real makes it harder to prosecute producers of real CSAM.
-
It's a very difficult subject; both sides have merit. I can see the "CSAM created without abuse could be used in treatment/management of people with these horrible urges" argument, but I can also see "allowing people to create CSAM could normalise it and lead to more actual abuse".
Sadly it's incredibly difficult for academics to study this subject and see which of those two effects is more prevalent.
-
Yes, but how is the AI making the images or videos? It has to be trained on SOMETHING.
So, direct harm or not, harm is done at some point in the process, and it needs to be stopped before it slips and gets worse because people "get used to" it.
I wouldn't think it needs to have child porn in the training data to be able to generate it. It has porn in the data and it knows what kids look like; merge the two. I think that works for anything AI knows about: make this resemble that.
-
Yes, but how is the AI making the images or videos? It has to be trained on SOMETHING.
So, direct harm or not, harm is done at some point in the process, and it needs to be stopped before it slips and gets worse because people "get used to" it.
AI can combine two things. It can train on completely normal pictures of children, and it can train on completely normal porn, and then it can put those together.
This is the same reason it can do something like Godzilla with Sailor Moon's hair, not because it trained on images of Godzilla with Sailor Moon's hair, but because it can combine those two separate things.
-
AI can combine two things. It can train on completely normal pictures of children, and it can train on completely normal porn, and then it can put those together.
This is the same reason it can do something like Godzilla with Sailor Moon's hair, not because it trained on images of Godzilla with Sailor Moon's hair, but because it can combine those two separate things.
Fair enough. I still think it shouldn't be allowed, though.
-
I wouldn't think it needs to have child porn in the training data to be able to generate it. It has porn in the data and it knows what kids look like; merge the two. I think that works for anything AI knows about: make this resemble that.
That's fair, but I still think it shouldn't be accepted or allowed.
-
Yes, but how is the AI making the images or videos? It has to be trained on SOMETHING.
So, direct harm or not, harm is done at some point in the process, and it needs to be stopped before it slips and gets worse because people "get used to" it.
needs to be stopped before it slips and gets worse because people "get used to" it.
Ah, right, I'd finally almost forgotten the killer games rhetoric.
-
needs to be stopped before it slips and gets worse because people "get used to" it.
Ah, right, I'd finally almost forgotten the killer games rhetoric.
I also don't agree with the killer games thing, but humans are very adaptable as a species.
Normally that's a good thing, but in a case like this, exposure to something shocking or upsetting can make it less shocking or upsetting over time (obviously not in every case). So, if AI is being used for something like this and it's being reported on, isn't it possible that people might slowly get desensitized to it over time?