What concrete steps can be taken to combat misinformation on social media? This problem is hardly an issue on this platform, but it certainly is elsewhere. Do you have any ideas or suggestions?
-
Then information hygiene went to shit. Now it’s a rare oasis in the current landscape.
It went to shit because people started treating low quality sources like Wikipedia as "a rare oasis".
The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.
Are you sure about that?
...You're kidding, right?
I'm looking around the information landscape around me, and Wikipedia is not even in the top 1000 of disinformation peddlers. They make mistakes, but they aren't literally lying and propagandizing millions of people on purpose.
-
...You're kidding, right?
I'm looking around the information landscape around me, and Wikipedia is not even in the top 1000 of disinformation peddlers. They make mistakes, but they aren't literally lying and propagandizing millions of people on purpose.
and Wikipedia is not even in the top 1000 of disinformation peddlers.
And you determined this how?
They make mistakes, but they aren’t literally lying and propagandizing millions of people on purpose.
And you determined this how?
-
step 1. misinformation is a problem on every platform. full stop.
I think what you mean is maliciously manufactured information. still, I believe Lemmy is subject to it.
I believe that both types can be effectively dispatched by moderating the community, but not in the sense that you might be thinking.
I believe that we are looking at community moderation from the wrong direction. today, the goal of the mod is to prune and remove undesired content and users. this creates high overhead and operational costs. it also increases the chances for corruption and community instability. look no further than Reddit and Lemmy, where we have a handful of mods in charge of multiple communities. who put them there? how do you remove them should they no longer have the community's best interests in mind? what power do I have as a user to bring attention to corruption?
I believe that if we flip the role of moderators to be instead guardians of what the community accepts instead of what they can see it greatly reduces the strain on mods and increases community involvement.
we already use a mechanism of up/down votes. should content hit a threshold below community standards, it's removed from view. should that user continue to receive below-par results inside the community, they are silenced. these par grades are rolling, so they would be able to interact within the community again after some time, but continued abuse of the community could result in permanent silencing. should a user be unjustly silenced due to abuse, mod intervention is necessary. this would then flag the downvoters for abuse demerits, and once a demerit threshold is hit, they are silenced.
notice I keep saying silenced instead of blocked? that's because we shouldn't block their access to content or the community, or even let them know nobody is seeing their content. in the case of malicious users/bots, the more time they waste screaming into a void, the less time they spend corrupting another community. in fact, I propose we allow these silenced users to interact with each other, where they can continue to toxify and abuse each other in a spiraling chain of abuse that eventually results in their permanent silencing. all the while, the community governs itself and the users hum along unaware of what's going on in the background.
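to make the mechanics concrete, here's a rough sketch of the rolling "par grade" and shadow-silencing idea. every name, number, and data structure here is made up for illustration, not any real API:

```python
# Hypothetical sketch of the rolling "par grade" / shadow-silencing idea.
# All thresholds are invented; a real community would have to tune them.
from collections import deque
from dataclasses import dataclass, field

REMOVAL_SCORE = -10        # a comment this far below par is hidden from view
SILENCE_PAR = -25          # rolling total that triggers a temporary silence
WINDOW = 50                # number of recent comments the rolling grade covers
STRIKES_FOR_PERMANENT = 3  # repeated silences become permanent

@dataclass
class CommunityUser:
    recent_scores: deque = field(default_factory=lambda: deque(maxlen=WINDOW))
    silences: int = 0
    silenced: bool = False  # shadow-silenced: can still post, nobody sees it

    def record_comment(self, score: int) -> None:
        self.recent_scores.append(score)
        # Rolling par grade: only the last WINDOW comments count, so a user
        # naturally recovers after some time of acceptable participation.
        if sum(self.recent_scores) <= SILENCE_PAR:
            self.silences += 1
            self.silenced = True

    def maybe_restore(self) -> None:
        # Cool-down: restore visibility once the rolling grade recovers,
        # unless abuse has continued past the permanent threshold.
        if self.silences >= STRIKES_FOR_PERMANENT:
            return
        if sum(self.recent_scores) > SILENCE_PAR:
            self.silenced = False

def is_visible(comment_score: int, author: CommunityUser) -> bool:
    # Silenced users still "post" into the void; their content simply
    # isn't rendered for the rest of the community.
    return not author.silenced and comment_score > REMOVAL_SCORE
```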
IMO it's up to the community to decide what is and isn't acceptable and mods are simply users within that community and are mechanisms to ensure voting abuse is kept in check.
Great idea but tough to keep people from gaming it
-
Great idea but tough to keep people from gaming it
genuinely curious how they would game it?
of course there's a way to game it, but I think it's a far better solution than what social media platforms are doing currently, and it gives more options than figuratively amputating parts of the community to save itself.
-
genuinely curious how they would game it?
of course there's a way to game it, but I think it's a far better solution than what social media platforms are doing currently, and it gives more options than figuratively amputating parts of the community to save itself.
If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.
At the same time, 10 might be a large portion of some communities while minuscule in others.
I suppose you limit votes to those in the specific community, but then you'd have to track their activity to see if they're real or just griefing, and track activity in relation to others to see if they're independent or all grief together. And moderators would need tools to not only discover but to manage griefing, and to configure sensitivity.
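As a sketch of what that tooling might look for, here is one hypothetical way to flag accounts that "grief together", by measuring how much their downvote histories overlap. All names and thresholds here are invented for illustration, not a real moderation API:

```python
# Illustrative only: flag pairs of accounts whose downvote histories
# overlap suspiciously, suggesting a coordinated smurf/grief ring.
# vote_log maps each account to the set of comment IDs it has downvoted.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    # Overlap of two sets relative to their union, in [0, 1].
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suspected_rings(vote_log: dict[str, set], threshold: float = 0.8,
                    min_votes: int = 10) -> list[tuple[str, str, float]]:
    """Return pairs of accounts that downvote the same targets too often."""
    flagged = []
    for u, v in combinations(vote_log, 2):
        if len(vote_log[u]) < min_votes or len(vote_log[v]) < min_votes:
            continue  # too little history to judge independence
        sim = jaccard(vote_log[u], vote_log[v])
        if sim >= threshold:
            flagged.append((u, v, sim))
    return flagged
```

This only surfaces candidates for a human mod to review; by itself, overlap can't distinguish a brigade from people who simply share tastes.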
-
Could the lemmings be referring to the old trope where some loudmouth (usually a conservative) bangs on about an issue with some minority group ad nauseam, and then some time later it turns out they were actually a perpetrator of the thing they banged on about, i.e. every accusation is an admission of guilt?
In this case, no: comments on these usually directly imply that he's admitting the rigged election was his team's doing. There's no mistaking it. They aren't pointing at the "every accusation is a confession" bit that conservatives usually do, but many of them have commented things like "this is a direct confession, jail him now!", sadly unironically.
While I agree the 2024 election definitely had fraud, and they're now further attempting to outright rig the midterms, the particular video I'm referring to wasn't the direct confession that some of these morons think it is.
And the problem also resides in the fact that this is only a single example....of many....
-
It's pretty regularly a big problem here.
But to answer your question, just check sources, verify with a second outlet, and call it out when you see it. That's all you can do on an individual level.
-
If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.
At the same time, 10 might be a large portion of some communities while minuscule in others.
I suppose you limit votes to those in the specific community, but then you'd have to track their activity to see if they're real or just griefing, and track activity in relation to others to see if they're independent or all grief together. And moderators would need tools to not only discover but to manage griefing, and to configure sensitivity.
you're right. the threshold is entirely dependent on the size of the community. it would probably be derived from some proportion of community subscribers and user interactions for the week/month.
should a comment be overwhelmingly positive, that would offset the threshold further.
in regards to griefing, if a comment or post is overwhelmingly upvoted and still hits the downvote threshold, that's when mods step in to investigate and make a decision. if it's found to not break rules, or is beneficial to the community, all downvoters are issued a demerit. after so many demerits those users are silenced in the community and go through the typical "cool down" process, or are permanently silenced for continued abuse.
the same could be done for the flip-side where comments are upvote skewed.
in this way, the community content is curated by the community and nurtured by the mods.
appeals could be implemented for users who have been silenced and fell through the cracks, and further action could be taken by the admins against mods that routinely abuse or game the system.
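as a rough sketch of how the size-relative threshold and demerit flow could fit together (the formula and every constant here are invented for illustration; tuning them is exactly the kind of decision each community would have to make):

```python
# Hypothetical sketch: size-relative removal threshold plus mod-reviewed
# demerits for downvoters who griefed good content. Not a real API.

def removal_threshold(subscribers: int, weekly_interactions: int,
                      upvotes: int) -> int:
    # Base threshold scales with community size and recent activity,
    # so 10 downvotes can't nuke content in a large community.
    base = max(5, subscribers // 200, weekly_interactions // 50)
    # Overwhelmingly positive reception offsets the threshold further.
    return base + upvotes // 2

def review_removal(upvotes: int, downvoters: list[str],
                   demerits: dict[str, int], mod_says_ok: bool,
                   demerit_limit: int = 3) -> list[str]:
    """If mods find the removed content didn't break rules, every downvoter
    gets a demerit; repeat offenders get silenced. Returns newly silenced."""
    silenced = []
    if mod_says_ok and upvotes > len(downvoters):  # upvote-skewed removal
        for user in downvoters:
            demerits[user] = demerits.get(user, 0) + 1
            if demerits[user] >= demerit_limit:
                silenced.append(user)
    return silenced
```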
I think it would also be beneficial to remove the concept of usernames from content. they would still exist for administrative purposes and to identify problem users, but I think communities would benefit from the "double blind" test. there have been plenty of times I have been downvoted just because of a previous interaction. likewise, I have upvoted because of a well-known user or a previous interaction with that user.
it's important to note this would change the psychological point of upvotes and downvotes. currently they're used more as "I agree with this" or "I cannot accept that". under the rules I've brought up, users would need to understand they have just as much to risk for upvoting or downvoting content. so when a user casts their vote, they truly believe it's in the interests of the community at large and they want that kind of content within the community. to downvote means they think the content doesn't meet the criteria for the community. should users continue to arbitrarily upvote or downvote based on their personal preferences instead of community-based objectivity, they might find themselves silenced from the community.
it's based on the principles of "what is good for society is good for me" and silences anyone in the community that doesn't meet the standards of that community.
for example, a community that is strictly for women wouldn't need to block men. as soon as a man self-identified or shared ideas that don't resonate with the community, he would be silenced pretty quickly. some women might even be silenced, but they would undoubtedly have shared ideas that were rejected by the community at large. this mimics the self-regulation that society has used for thousands of years, IMO.
I think we need to stop looking at social networks as platforms for individuals and look at them as platforms for the community as a whole. that's really the only way we can block toxicity and misinformation from our communities. undoubtedly it will create echo chambers.
-
Lol misinformation is still an issue on Lemmy, don't kid yourself
Wait, you mean Stalin wasn't a cuddly teddy bear?
-
Note that Wikipedia is not a proper source.
Ruh roh. Better inform the mods over at /c/wikipedia
-
impossible, when the platform itself is the one enabling or promoting it. google/youtube and meta all allow it and encourage it because it's more advertisement money, plus it shores up male/right wing voters, which will benefit the companies in the long run in the form of low/non-existent taxes plus tax havens. they think long term. "left leaning" (that is, not annoying tankie rhetoric) content is almost universally quashed or heavily astroturfed on most SOCIAL media.
Reddit is getting there. Only way is to host your own forum and have controls, probably some form of automation to block trolls and spammers.
the users should be cognizant of what is being said and should fact-check themselves to prevent being drawn into disinformation/misinformation.
male/right wing
Hahaha. "male slash right-wing" what are you on about
-
Misinformation is part of the nature of social media and can't be fixed. Stupid people are stupid. There are A LOT of them on social media. The dishonest take advantage of the stupid to spread misinformation. The only way to counteract it is to have gatekeeping, which will crush the user count and block out the biggest users, and network effect will funnel most of the rest into the biggest platform (i.e., the one with the most lenient gatekeeping).
The only hope is that people realize how stupid, unrepresentative, and unsuitable social media discourse is. It's a place to find funny pictures of cats and boobs. Looking to it for anything serious, or pretending what you see there is representative of anything, is pointless at best and likely harmful.
-
What concrete steps can be taken to combat misinformation on social media? […]
Regarding my own content: I do my best to cite any claim that I make, no matter how trivial. If I make a statement for which I lack confidence in its veracity, I do my best to convey that uncertainty. I do my best to convey explicitly whether a statement is a joke, or sarcasm.
Fundamentally, my approach to this issue is based on this quote:
Rationality is not a character trait, it's a process. If you fool yourself into believing that you're rational by default, you open yourself up to the most irrational thinking. ^[1]^
Regarding the content of others: If I come across something that I believe to be false, I try to politely respond to it with a sufficiently and honestly cited statement explaining why I think it is false. If I come across something of unknown veracity/clarity, I try to politely challenge the individual responsible to clarify their intent/meaning.
For clarity, I have no evidence to support that what I'm doing is an effective means to this end, but I want to believe that it's helping in at least some small way.
::: spoiler References
- Type: Comment. Author: "@The8BitPianist". Publisher: [Type: Post (Video). Title: "On These Questions, Smarter People Do Worse". Author: "Veritasium" ("@veritasium"). Publisher: YouTube. Published: 2024-11-04T16:48:03Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno.]. Published: 2024-11-04T09:06:26Z. Accessed: 2025-03-29T07:48Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno&lc=Ugy6vV7Z3EeFHkdfbHl4AaABAg.
:::
-
[misinformation] is hardly an issue on this platform […]
In my opinion, that statement of yours is, ironically, responsible for why there may be an issue with misinformation. You state it with certainty, yet you provide no source to back up your claim. It is my belief that this sort of conjecture is at the source of misinformation issues.
-
I look at any individual's history when they post anything sketchy and contextualize. Anything politically motivated is likely a shill unless they have a long broadly engaged post history across many subjects with depth. I block a lot of people too.
I look at any individual’s history when they post anything sketchy and contextualize. […]
I am concerned that this would distill down to argumentum ad hominem.
-
If we want to go the route of the Responsibility of the Individual:
Resolve not to get your political etc. news from social media. Draw a line for yourself: cool to get gaming news from random influencers online? Probably. News about global events? At this point it might be better for most people's mental health to ignore them and focus more locally. However, read How to Read a Book, make your best effort at finding a reputable news organization, and check those for news if you must have them. In the same vein, if you don't read at least some article about an event being discussed on social media, DON'T COMMENT. Don't engage with that post. If it really grabs at you, go find an article about it from a trusted source, and depending on how much it animates you, try to get a bigger picture of the event. Assume that the vast majority of ALL CONTENT online is currently incentivized to engage you, to capture your attention, which is actually the most valuable asset you have. Where you put your attention will define how you feel about your life. It's highly advisable to put it where you feel love.

Responsibility of the Collective:
Moving up the hierarchy, we can start demanding that social media moderators (or whatever passes for those on any given site) prevent misinformation as much as possible. Try to only join communities that have mods who do this. Failing that, demand that social media platforms prevent misinformation. Failing that, we can demand the government do more to prevent misinformation. All of those solutions have significant issues, one of them being that they are all heavily incentivized to capture the attention of as many people as possible. It doesn't matter what the exact motivation is; it could be a genuinely good one. A news organization uses social media tactics to get the views so that their actually very factual and diligently compiled articles get the spread. Or they could be looking to drive their political agenda, which they necessarily do anyway, because the desire to be factual and as neutral as possible is a stance as well, one that may run afoul of the interests of some government that doesn't value freedom of the press. Which is very dangerous, and you need to think hard for yourself about how you feel about the idea of the government limiting what kind of information you can access. For the purposes of making this shorter, you can regard massive social media platforms as virtual governments too. In fact, it would be a good idea in general.

The thing with misinformation is that many people who talk about it subtly think that they are above it themselves. They're thinking that they know they're not subject to propaganda and manipulation, but it's the other poor fools that need to be protected from it. It's the QAnon crowd and the antivaxxers. But you know better; you know how to dig deeper into massively complicated global topics and find out what the true and right opinion about them is. You can't. Not even if we weren't in the middle of multiple fucking information wars. You'd do well to focus on what you can know for sure, in your own experience. If you don't like the idea of individual responsibility, though, because "most people aren't going to do it", your best bet at getting a collective response is a group of individuals coming together under the same ideal. It'll happen sooner or later anyway, and there's going to be plenty of suffering before then either way.
[…] read How to Read a Book […]
Thank you for the recommendation
-
Teach people how to cite appropriately.
We learned how to do it in middle school, but I can tell most of my adult peers either didn't pay attention or forgot.
-
I don't know, but the outright lies on Facebook are making me mad. People actually believe JK Rowling is suing HBO over the casting of Snape when, in reality, she is helping produce the show and is fine with the casting.