What concrete steps can be taken to combat misinformation on social media? This problem is hardly an issue on this platform, but it certainly is elsewhere. Do you have any ideas or suggestions?
-
We can start demanding that social media moderators (or whatever passes for them on any given site) prevent misinformation as much as possible.
Yeah, but how are you expecting moderators to determine what is and isn't misinformation?
That's one of the many issues with expecting a collective resolution. The question is: why do people feel they need to be able to discuss issues way beyond their understanding and personal experience online, with others who also don't know much about them? If actually done well, moderation is a full-time job, but nobody is interested in paying a bunch of online jannies to clean their space.
That's why I favor individual responsibility, and opting out of the possibility of being exposed to (or perpetuating) misinformation. Maybe in the future we can have forums for verified experts of a field, where regular people can have discussions with them and ask questions etc. But these would be moderated places where you do need to bring proof and sound arguments, not emotionally charged headlines.
The stories and information posted on social media are artistic works of fiction and falsehood. Only a fool would take anything posted as fact.
-
This problem is hardly an issue on this platform.
And this is the problem.
I see objectively misleading, clickbait headlines and articles from bad (e.g. not recommended by Wikipedia) sources float to the top of Lemmy all the time.
I call them out, but it seems mods are uninterested in enforcing more strict information hygiene.
Step 1 is teaching journalism and social media hygiene as a dedicated class in school, or on social media… And, well, the US is kinda past that being possible :/.
There might be hope for the rest of the world.
yeah, lemmy could stop pushing extreme leftist misinformation from mysterious online "news" sources and rewriting history
that would be a great start
-
yeah, lemmy could stop pushing extreme leftist misinformation from mysterious online "news" sources and rewriting history
that would be a great start
Yeah, western right-wing neoliberal misinformation only.
-
bad (eg not recommended by Wikipedia)
If you want to know why misinformation is so prominent, the fact that you think this is a good standard is a big part of it.
Step 1 is teaching journalism and social media hygiene as a dedicated class in school
And will those classes be teaching "Wikipedia is the indisputable rock of factuality, the holy Scripture from which truth flows"?
It's not, of course, but it's a good start. Certainly good enough to use as a quick but fallible reference:
https://en.m.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources
As I heard someone else here quote, perfect is the enemy of good.
-
How are you going to counter misinformation if you can't determine what is and isn't misinformation?
What makes you think I couldn't tell the difference?
-
yeah, lemmy could stop pushing extreme leftist misinformation from mysterious online "news" sources and rewriting history
that would be a great startThat’s not what I meant. It’s true that too many left leaning tabloids get upvoted to the front page, but the direction of the slant isn’t the point, and there’s nothing “mysterious” about them. They’re clickbait/ragebait.
-
What makes you think I couldn't tell the difference?
The fact that you said you're concerned with verifying information
-
It's not, of course, but it's a good start. Certainly good enough to use as a quick but fallible reference:
https://en.m.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources
As I heard someone else here quote, perfect is the enemy of good.
It's not, of course, but it's a good start. Certainly good enough to use as a quick but fallible reference:
No, it really isn't. The fact that Wikipedia has been arbitrarily vested with such supreme authority to be the default source of truth by so many people is a big part of why misinformation is so common. Back in my day, even high schoolers were taught not to do that.
-
The fact that you said you're concerned with verifying information
What I meant was that my question wasn't about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.
I was more interested in how we can effectively and meaningfully contribute to countering the flood of misinformation on social media (such as Twitter or meta apps).
The background to my question is the fact that this misinformation influences users' opinions. I think the US is the best example of where that can lead. Unfortunately, there are similar trends in my home country. Since I don't want to be ruled by fascists, I thought I'd ask the community here what can be done.
But apparently I didn't phrase the question very well.
-
It's not, of course, but it's a good start. Certainly good enough to use as a quick but fallible reference:
No, it really isn't. The fact that Wikipedia has been arbitrarily vested with such supreme authority to be the default source of truth by so many people is a big part of why misinformation is so common. Back in my day, even high schoolers were taught not to do that.
Yes, I remember too. We were specifically told not to use Wikipedia.
Then information hygiene went to shit. Now it's a rare oasis in the current landscape.
Look, I'm not saying to start referencing Wikipedia in scholarly journals or papers. But it's more accessible than some JSTOR database and way above average, and more of the population using it would be a wonderful thing. The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.
-
What I meant was that my question wasn't about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.
I was more interested in how we can effectively and meaningfully contribute to countering the flood of misinformation on social media (such as Twitter or meta apps).
The background to my question is the fact that this misinformation influences users' opinions. I think the US is the best example of where that can lead. Unfortunately, there are similar trends in my home country. Since I don't want to be ruled by fascists, I thought I'd ask the community here what can be done.
But apparently I didn't phrase the question very well.
What I meant was that my question wasn’t about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.
Well that makes one of us. My experience is that most Lemmy users think Wikipedia was written by God himself.
-
Yes, I remember too. We were specifically told not to use Wikipedia.
Then information hygiene went to shit. Now it's a rare oasis in the current landscape.
Look, I'm not saying to start referencing Wikipedia in scholarly journals or papers. But it's more accessible than some JSTOR database and way above average, and more of the population using it would be a wonderful thing. The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.
Then information hygiene went to shit. Now it’s a rare oasis in the current landscape.
It went to shit because people started treating low quality sources like Wikipedia as "a rare oasis".
The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.
Are you sure about that?
-
Then information hygiene went to shit. Now it’s a rare oasis in the current landscape.
It went to shit because people started treating low quality sources like Wikipedia as "a rare oasis".
The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.
Are you sure about that?
...You're kidding, right?
I'm looking around the information landscape around me, and Wikipedia is not even in the top 1000 of disinformation peddlers. They make mistakes, but they aren't literally lying and propagandizing millions of people on purpose.
-
...You're kidding, right?
I'm looking around the information landscape around me, and Wikipedia is not even in the top 1000 of disinformation peddlers. They make mistakes, but they aren't literally lying and propagandizing millions of people on purpose.
and Wikipedia is not even in the top 1000 of disinformation peddlers.
And you determined this how?
They make mistakes, but they aren’t literally lying and propagandizing millions of people on purpose.
And you determined this how?
-
step 1. misinformation is a problem on every platform. full stop.
I think what you mean is maliciously manufactured information. still, I believe Lemmy is subject to it.
I believe that both types can be effectively dispatched by effectively moderating the community, but not in the sense that you might be thinking.
I believe that we are looking at community moderation from the wrong direction. today, the goal of the mod is to prune and remove undesired content and users. this creates high overhead and operational costs. it also increases the chances for corruption and community instability. look no further than Reddit and lemmy for this, where we have a handful of mods that are in charge of multiple communities. who put them there? how do you remove them should they no longer have the community's best interests in mind? what power do I have as a user to bring attention to corruption?
I believe that if we flip the role of moderators to instead be guardians of what the community accepts, rather than of what users can see, it greatly reduces the strain on mods and increases community involvement.
we already use a mechanism of up/down vote. should content hit a threshold below community standards, it's removed from view. should that user continue to receive below par results from inside the community, they are silenced. these par grades are rolling, so they would be able to interact within the community again after some time but continued abuse of the community could result in permanent silencing. should a user be unjustly silenced due to abuse, mod intervention is necessary. this would then flag the downvoters for abuse demerits and once a demerit threshold is hit, are silenced.
notice I keep saying silenced instead of blocked? that's because we shouldn't block their access to content or the community, or even let them know nobody is seeing their content, in the case of malicious users/bots. the more time wasted on screaming into a void, the less time wasted on corrupting another community. in fact, I propose we allow these silenced users to interact with each other, where they can continue to toxify and abuse each other in a spiraling chain of abuse that eventually results in their permanent silencing. all the while, the community governs itself and the users hum along, unaware of what's going on in the background.
IMO it's up to the community to decide what is and isn't acceptable and mods are simply users within that community and are mechanisms to ensure voting abuse is kept in check.
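The vote-threshold scheme described above could be sketched roughly as follows. Everything here is hypothetical illustration: the threshold values, class names, and the strike-counting rule are invented for the sketch, and the rolling "cool down" of strikes over time is omitted.

```python
from dataclasses import dataclass, field

# Hypothetical constants; a real system would scale these
# with community size and activity.
HIDE_SCORE = -5        # net score at which a post is hidden from view
SILENCE_STRIKES = 3    # hidden posts before the author is silenced

@dataclass
class Post:
    author: str
    score: int = 0     # upvotes minus downvotes

    @property
    def hidden(self) -> bool:
        # Content below the community standard is removed from view.
        return self.score <= HIDE_SCORE

@dataclass
class Community:
    strikes: dict = field(default_factory=dict)

    def record(self, post: Post) -> None:
        # Each post that falls below community standards earns a strike.
        if post.hidden:
            self.strikes[post.author] = self.strikes.get(post.author, 0) + 1

    def silenced(self, author: str) -> bool:
        # A silenced user can still post, but nobody sees it, per the
        # proposal; in the full scheme strikes would roll off over time.
        return self.strikes.get(author, 0) >= SILENCE_STRIKES
```

The point of the sketch is that no mod action is needed on the happy path: the community's votes alone hide content and eventually silence repeat offenders.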
Great idea but tough to keep people from gaming it
-
Great idea but tough to keep people from gaming it
genuinely curious: how would they game it?
of course there's a way to game it, but I think it's a far better solution than what social media platforms are doing currently, and it gives more options than figuratively amputating parts of the community to save itself.
-
genuinely curious: how would they game it?
of course there's a way to game it, but I think it's a far better solution than what social media platforms are doing currently, and it gives more options than figuratively amputating parts of the community to save itself.
If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.
At the same time, 10 might be a large portion of some communities while minuscule in others.
I suppose you limit votes to those in the specific community, but then you'd have to track their activity to see if they're real or just griefing, and track activity in relation to others to see if they're independent or all grief together. And moderators would need tools to not only discover but to manage griefing, and to configure sensitivity.
-
Could the lemmings be referring to the old trope where some loudmouth (usually a conservative) bangs on about an issue with some minority group ad nauseam, and then some time later it turns out they were actually a perpetrator of the thing they banged on about, i.e. every accusation is an admission of guilt?
In this case, no - comments in these usually directly imply that he's saying that the rigged election was his team's. There's no mistaking it. They aren't pointing at the "every accusation is a confession" bit that conservatives usually do, but many of them have commented things like "this is a direct confession, jail him now!" sadly, unironically.
While I agree the 2024 election definitely had fraud, and they're further attempting to now outright rig the midterms, the particular video I'm referring to wasn't the direct confession that some of these morons think it is.
And the problem also resides in the fact that this is only a single example....of many....
-
This post did not contain any content.
It's pretty regularly a big problem here.
But to answer your question, just check sources, verify with a second outlet, and call it out when you see it. That's all you can do on an individual level.
-
If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.
At the same time, 10 might be a large portion of some communities while minuscule in others.
I suppose you limit votes to those in the specific community, but then you'd have to track their activity to see if they're real or just griefing, and track activity in relation to others to see if they're independent or all grief together. And moderators would need tools to not only discover but to manage griefing, and to configure sensitivity.
you're right. the threshold is entirely dependent on the size of the community. it would probably be derived from some part of community subscribers and user interactions for the week/month.
should a comment be overwhelmingly positive, that would offset the threshold further.
in regards to griefing, if a comment or post is overwhelmingly upvoted and hits the downvote threshold that's when mods step in to investigate and make a decision. if it's found to not break rules or is beneficial to the community all downvoters are issued a demerit. after so many demerits those users are silenced in the community and follow through typical "cool down" processes or are permanently silenced for continued abuse.
the same could be done for the flip-side where comments are upvote skewed.
in this way, the community content is curated by the community and nurtured by the mods.
appeals could be implemented for users who have been silenced and fell through the cracks, and further action could be taken by the admins against mods that routinely abuse or game the system.
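The size-scaled threshold and the demerit flow for overturned removals might look something like this. The scaling formula, constants, and function names are invented purely to make the idea concrete; they are not from any existing platform.

```python
def hide_threshold(subscribers: int, weekly_interactions: int) -> int:
    # Invented scaling rule: the downvote threshold grows with both
    # subscriber count and recent activity, so 10 downvotes might hide
    # a post in a small community but mean nothing in a large one.
    base = max(5, subscribers // 100)
    return base + weekly_interactions // 500

def overturned_removal(downvoters, demerits, limit=3):
    """A mod found a hidden post didn't break the rules: each downvoter
    gets a demerit, and anyone reaching the limit is silenced."""
    silenced = set()
    for user in downvoters:
        demerits[user] = demerits.get(user, 0) + 1
        if demerits[user] >= limit:
            silenced.add(user)
    return silenced
```

For example, under this sketch a 100-subscriber community would hide a post at 5 net downvotes, while a 10,000-subscriber one with heavy traffic would need far more, and brigaders who repeatedly trigger overturned removals would accumulate demerits until silenced.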
I think it would also be beneficial to remove the concept of usernames from content. they would still exist for administrative purposes and to identify problem users, but I think communities would benefit from the "double blind" test. there's been plenty of times I have been downvoted just because of a previous interaction. also the same, I have upvoted because of a well known user or previous interaction with that user.
it's important to note this would change the psychological point of upvote and downvotes. currently they're used in more of an "I agree with" or "I cannot accept that". using the rules I've brought up would require users to understand they have just as much to risk for upvoting or downvoting content. so when a user casts their vote, they truly believe it's in the interests of the community at large and they want that kind of content within the community. to downvote means they think the content doesn't meet the criteria for the community. should users continue to arbitrarily upvote or downvote based on their personal preferences instead of community based objectivity, they might find themselves silenced from the community.
it's based on the principles of "what is good for society is good for me" and silences anyone in the community that doesn't meet the standards of that community.
for example, a community that is strictly for women wouldn't need to block men. as soon as a man self-identified, or shared ideas out of step with the community, he would be silenced pretty quickly. some women might even be silenced, but they would undoubtedly have shared ideas that were rejected by the community at large. this mimics the self-regulation that society has used for thousands of years, IMO.
I think we need to stop looking at social networks as platforms for the individuals and look at them as platforms for the community as a whole. that's really the only way we can block toxicity and misinformation from our communities. undoubtedly it will create echo chambers