Bluesky Deletes AI Protest Video of Trump Sucking Musk's Toes, Calls It 'Non-Consensual Explicit Material'
-
If you can explain the existence of wikipedia under your theory then I'll listen to you, but like... wow. Profit motive, what a joke. That's literally what causes enshittification.
existence of wikipedia
They got the ease of use down, largely because it's a centralized service. You can literally go there, click edit, and submit a change, and you can also make an account if you want credit. It was also largely the first of its kind, so it was easy for people to get passionate about it. I made a bunch of edits in the relatively early days (the 2000s), because I thought it was really cool. I do the same for OpenStreetMap today, because it has a good amount of info but still needs some data entry here and there (I use Organic Maps on mobile).
That said, projects like Wikipedia aren't very common. It started around the time the dot-com bubble burst, so they had a fair amount of cash to kick things off with, and it got traction before the money ran out. They were able to reuse a lot of what they learned from another commercial project, and the community project ended up eating the original project's lunch.
I'm not arguing that profit is required for something to succeed, I'm merely arguing that money really helps a project get off the ground, and if there are multiple competing projects, the one with better marketing and a smoother user experience will usually win.
I didn't say profit guarantees projects live a long time or anything of that nature, I merely said users tend to flock to platforms that have a strong profit motive, probably because they have better marketing and funding for a better UX. First impressions matter a lot when it comes to a commercial product, so they tend to do a good job at that. That's why BlueSky is more attractive than Mastodon, and why whatever comes next will also likely be more attractive than Mastodon.
-
It's just really weird that you turn to profit motive as a benefit when we're talking about systems that tend to enshittify, and that's like, the main thing that makes them enshittify.
My argument is about how enshittification destroys platforms; platforms that don't do that will retain their growth. Bluesky has all the ingredients to enshittify, Mastodon doesn't.
Yes, they need to work on their onboarding, but unlike Bluesky, they can keep at it till it sticks. Centralised platforms get a launch and a lifecycle, and then they tend to go away.
Quite literally the opposite of what you said. If a platform is central, it can be switched off tomorrow. Nobody can do that to the fediverse as long as the internet exists. The idea that hobbyists are somehow less reliable than fucking corporations is also absurd. Have you met corporations?
This is literally a tortoise-and-the-hare situation.
-
It’s just really weird that you turn to profit motive as a benefit
Why? That's pretty much the common thread in successful SM apps vs unsuccessful SM apps. The ones w/ profit motive attract investors, which means better marketing and initial rollout, which leads to more users.
I'm not saying it's good or bad, just that it's effective.
destroys platforms
What's the benefit you're trying to get out of platforms?
Mastodon will probably stumble along in some form for a long time, but servers will come and go, meaning content will come and go. The same is true for Lemmy, many of the bigger servers will likely go away in 10-20 years, if not sooner, as the admins get tired of hosting them (it's pretty expensive). The platform will likely continue to exist, but you'll probably need to jump between servers every so often.
I guess I don't see that as hugely different from jumping from Twitter to BlueSky. Twitter had a good run, and maybe BlueSky will have a similar run.
Nobody can do that to the fediverse as long as the internet exists.
Maybe the entirety of the fediverse won't die, but significant portions will disappear from time to time as servers drop out and new ones join.
I really don't see a case for the Fediverse "winning" in any meaningful sense. The reason Wikipedia succeeded is because it has permanence. The Fediverse lacks that, so why wouldn't people just jump to the flavor of the week instead? You know, the flashy new thing that uses the latest designs and has some interesting gimmick.
I think the Fediverse will always be playing catch-up. Development is relatively slow, and it has proven less capable of seizing opportunities than Bluesky. Why? Because Bluesky is swimming in money, whereas Mastodon, Lemmy, et al. are hobby projects. Hobby projects work well in areas where they form a foundation (e.g. Linux), but they don't work as well at chasing fads. Why isn't there a popular alternative to Snapchat, TikTok, or other "flavors of the week"? Because FOSS moves slowly, and will never keep up with the fads in SM.
So my issues with the Fediverse are:
- data is unlikely to be permanent
- development is slow
- hosting is somewhat expensive (~$150/month for my instance, which I think is low and doesn't include labor); not sure what Mastodon costs
- not very discoverable - SEO is almost nonexistent
- UX is a bit... lacking... compared to commercial alternatives
I'm not saying it's bad, I'm just saying it's an uphill battle with a fair amount of caveats.
-
Ah, the rewards of moderation: the best move is not to play.
Fuck it is & has always been a better answer.
Anarchy of the early internet was better than letting some paternalistic authority decide the right images & words to allow us to see, and decentralization isn't a bad idea. Yet the forward-thinking people of today know better and insist that with their brave, new moderation they'll paternalize better, without stopping to acknowledge how horribly broken, arbitrary, & fallible that entire approach is.
Instead of learning what we already knew, social media keeps repeating the same dumb mistakes.
Elon acts like a new Reddit mod drunk on power. He is the guy screaming in the comments that he knows how to run a forum better and seized the chance, and now he cannot fathom why people hate him.
-
I'll just explain why that is a horrible idea with three simple letters:
A. O. C.
I am standing on the wire
what is the problem with satire and AOC (whatever that is)?
-
I miss the early days of the internet when it was still a wild west.
Something like I hate you myg0t 2 or Pico's School would have gotten the creators cancelled if released in 2025.
Note on the term canceling. Independent creators cannot, by definition, get canceled. Unless you literally are under a production or publishing contract that gets actually canceled due to something you said or did, you were not canceled. Being unpopular is not getting canceled, neither is receiving public outrage due to being bad or unpopular. Even in a figurative sense, just the fact that the videos were published to YouTube and can still be viewed means they were not canceled. They just fell out of the zeitgeist and aren't popular anymore, that happens to 99% of entertainment content.
-
Instead of learning what we already knew, social media keeps repeating the same dumb mistakes.
I think there's a huge difference between fighting bullying or hate speech against minorities and making fun of very specific, very public people.
-
Instead of learning what we already knew, social media keeps repeating the same dumb mistakes.
You clearly were never the victim back in those days. Nor do you realize this approach doesn't work on the modern web even in the slightest, unless you want the foundations of the Enlightenment, and with them science and democracy, to crumble even faster.
Anarchism is never an answer; it's usually willful ignorance of the problems that exist.
-
Yes, exactly this. Like, something might be technically better, but unless it's doing its main job of actually connecting people, it's not going to work.
I wish more FOSS nerds understood this.
Many FOSS nerds don't even understand the necessity of a user-friendly GUI…
-
Because there is zero marketing for Mastodon. Zero sex appeal to Mastodon.
Bluesky was a better car salesman selling the same old car Twitter had.
The sad truth is that the vast majority of people WANT an algorithm to tell them what they like.
Mastodon requires you to actually have your own opinions going in, and follow material based on that.
-
You do remember the snuff and goatse and CSAM of the early internet, I hope.
Even with that it was of course better, because that stuff still floats around, and small groups of enjoyers easily find ways to share it over mainstream platforms.
I'm not even talking about big groups of enjoyers: ISIS (rebranded sometimes), Turkey, Azerbaijan, Israel, Myanmar's regime, the cartels, and everyone else share what they want of the snuff genre, and it stays up long enough.
In text communication their points of view are also less likely to be banned or suppressed than mine.
So yes.
Yet the forward-thinking people of today know better and insist that with their brave, new moderation they’ll paternalize better
They don't think so; they just use the opportunity to do this stuff in areas where immunity against it is not yet established.
There are very few stupid people in positions of power, competition is a bitch.
I'm weirded out when people say they want zero moderation. I really don't want to see any more beheading or CSAM and moderation can prevent that.
-
That's a really thin line. I have a hard time imagining anyone sticking to this same argument if the satire were directed at someone they admired in a similar position of power. The prime minister visiting a school is a world away from AI-generated content of something that never actually happened. Leaving nuance out of these policies isn't some corporation pulling wool over our eyes; it's just really hard to do nuance at scale without bias and commotion.
Yeah I really don't like that this is probably going to end up being used to argue that deepfake porn of public figures is ok as long as it is "satire".
I don't really care about the Trump x Musk one but I know for a fact that this will lead to MAGAs doing the same shit to AOC and any other prominent woman on the democrat side.
-
Moderation should be optional.
Say a message may carry any number of "moderating authority" verdicts, and each user sets up the logic for interpreting them: see only messages vetted by authority A, only by authority B, only by A or B, all messages not blacklisted by authority A, and plenty of other variants; say, trust authority C unless authority F thinks otherwise, because we trust authority F to know the things C is trying to reduce in visibility.
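This kind of user-chosen verdict logic is straightforward to express in code. A minimal sketch of the idea, with all authority names and verdict values hypothetical:

```python
# Hypothetical sketch: each message carries per-authority verdicts,
# and the user composes the logic for interpreting them.

def vetted_by(authority):
    return lambda verdicts: verdicts.get(authority) == "ok"

def blacklisted_by(authority):
    return lambda verdicts: verdicts.get(authority) == "block"

def any_of(*preds):
    return lambda verdicts: any(p(verdicts) for p in preds)

# "show messages vetted by A or by B, but never ones blacklisted by A"
def my_filter(verdicts):
    show = any_of(vetted_by("A"), vetted_by("B"))
    return show(verdicts) and not blacklisted_by("A")(verdicts)

messages = [
    {"text": "harmless post", "verdicts": {"A": "ok"}},
    {"text": "awful post", "verdicts": {"A": "block", "B": "ok"}},
]
visible = [m["text"] for m in messages if my_filter(m["verdicts"])]
# visible == ["harmless post"]
```

The point is that the verdicts themselves are just data; which ones get honored, and how they combine, is the reader's choice rather than the platform's.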
Filtering and censorship are two different tasks. We don't need censorship to avoid seeing CSAM. Filtering is enough.
This fallacy is very easy to encounter: people justify censoring something for everyone by their own unwillingness to encounter it, as if that were not solvable, and they refuse to see that it is technically solvable. Such a "verdict" from a moderation authority, by the way, is as cheap to implement as an upvote or a downvote.
For a human, or even a group of humans, it's hard to pre-moderate every post in a given period of time, but that's solvable too: put an AI classifier before the humans and have them check only the uncertain cases (or certain ones someone complained about, or certain ones another trusted moderation authority flagged the opposite way; you get the idea).
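A rough sketch of that triage; the scoring function and thresholds here are made up, and a real system would call an actual classifier model:

```python
# Hypothetical triage: the classifier auto-handles confident cases,
# and only uncertain posts go to the human moderation queue.

def classifier_score(post):
    # stand-in for a real ML model returning P(violation)
    fake_scores = {"cat photo": 0.02, "edgy meme": 0.55, "gore clip": 0.97}
    return fake_scores.get(post, 0.5)

def triage(posts, low=0.2, high=0.9):
    approved, removed, human_queue = [], [], []
    for post in posts:
        score = classifier_score(post)
        if score < low:
            approved.append(post)      # confidently fine: publish
        elif score > high:
            removed.append(post)       # confidently violating: auto-flag
        else:
            human_queue.append(post)   # uncertain: a human decides
    return approved, removed, human_queue

approved, removed, queue = triage(["cat photo", "edgy meme", "gore clip"])
# approved == ["cat photo"], removed == ["gore clip"], queue == ["edgy meme"]
```

Tuning the two thresholds trades off how much the humans see against how many mistakes the machine makes on its own.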
I like that subject, I think it's very important for the Web to have a good future.
-
people justify by their unwillingness to encounter something the need to censor it for everyone...
I can't engage in good faith with someone who says this about CSAM.
Filtering and censorship are two different tasks. We don’t need censorship to avoid seeing CSAM. Filtering is enough.
No it is not. People are not tagging their shit properly when it is illegal.
-
I can't engage in good faith
Right, you can't.
If someone posts CSAM, police should get their butts to that someone's place.
No it is not. People are not tagging their shit properly when it is illegal.
What I described doesn't have anything to do with people tagging what they post. It's about users choosing the logic of interpreting moderation decisions. But I've described it very clearly in the previous comment, so please read it or leave the thread.
-
Yeah I really don't like that this is probably going to end up being used to argue that deepfake porn of public figures is ok as long as it is "satire".
I don't really care about the Trump x Musk one but I know for a fact that this will lead to MAGAs doing the same shit to AOC and any other prominent woman on the democrat side.
And that would be okay
-
Bluesky will become just the same as Elon's X...
It already is
-
I am standing on the wire
what is the problem with satire and AOC (whatever that is)?
The problem is the combination of AOC and non-consensual explicit AI content. Overly broad rules might make that fall under satire, which is why caution is advised when devising such rules.
-
Bluesky deleted a viral, AI-generated protest video in which Donald Trump is sucking on Elon Musk's toes because its moderators said it was "non-consensual explicit material." The video was broadcast on televisions inside the Department of Housing and Urban Development's offices earlier this week, and quickly went viral on Bluesky and Twitter.
Independent journalist Marisa Kabas obtained a video from a government employee and posted it on Bluesky, where it went viral. Tuesday night, Bluesky moderators deleted the video because they said it was “non-consensual explicit material.”
Other Bluesky users said that versions of the video they uploaded were also deleted, though it is still possible to find the video on the platform.
Technically speaking, the AI video of Trump sucking Musk's toes, which had the words "LONG LIVE THE REAL KING" shown on top of it, is a non-consensual AI-generated video, because Trump and Musk did not agree to it. But social media platform content moderation policies have always had carve-outs that allow for the criticism of powerful people, especially the world's richest man and the literal president of the United States.
For example, we once obtained Facebook's internal rules about sexual content for content moderators, which included broad carve-outs to allow for sexual content that criticized public figures and politicians. The First Amendment, which does not apply to social media companies but is relevant considering that Bluesky told Kabas she could not use the platform to "break the law," has essentially unlimited protection for criticizing public figures in the way this video does.
Content moderation has been one of Bluesky’s growing pains over the last few months. The platform has millions of users but only a few dozen employees, meaning that perfect content moderation is impossible, and a lot of it necessarily needs to be automated. This is going to lead to mistakes. But the video Kabas posted was one of the most popular posts on the platform earlier this week and resulted in a national conversation about the protest. Deleting it—whether accidentally or because its moderation rules are so strict as to not allow for this type of reporting on a protest against the President of the United States—is a problem.
I seem to be in the minority here, but I am extremely uncomfortable with the idea of non-consensual AI porn of anyone. Even people I despise. It's so unethical that it just disgusts me. I understand why there are exceptions for those in positions of power, but I'd be more than happy to live in a world where there weren't.
-
Where do you draw the line for the rich fucks of the world? Realistic CGI? Realistic drawings? Edited photos?