Study of 8k Posts Suggests 40+% of Facebook Posts are AI-Generated
-
[email protected] replied to [email protected]
Considering that they do automated analysis, 8k posts does not seem like a lot. But still very interesting.
-
[email protected] replied to [email protected]
I think you give them too much credit. As long as it doesn't actively hurt their numbers, like X, it's just part of the budget.
-
[email protected] replied to [email protected]
The title says 40% of posts, but the article says 40% of long-form posts, and it doesn't in any way specify what counts as a long-form post. My understanding is that the vast majority of Facebook posts are about the length of a tweet, so I doubt the title is even remotely accurate.
-
[email protected] replied to [email protected]
I'm pretty sure chatbots were a thing before AI. They certainly weren't as smart, but they did exist.
-
[email protected] replied to [email protected]
Very true.
But also so stupid because their user base is, what, a good fraction of the planet? How can they grow?
-
[email protected] replied to [email protected]
This kind of just looks like an ad for that company's AI detection software, NGL.
-
[email protected] replied to [email protected]
I see no problem if the poster discloses that the source is AI. That automatically devalues the content of the post/comment and should trigger the reaction that the information is to be taken with a grain of salt and needs to be fact-checked to improve the likelihood that what was written is fact.
An AI output is most of the time a good indicator of what the truth is, and can bring new talking points to a discussion. But it is of course not a "killer argument".
-
[email protected] replied to [email protected]
This whole concept relies on the idea that we can reliably detect AI, which is just not true. None of these "AI detector" apps or services actually work reliably; they have terribly low success rates. The whole point of LLMs is to be indistinguishable from human text, so if they're working as intended, you can't really "detect" them.
So all of these claims, especially given the precision with which they're stated (24.05%, etc.), are almost meaningless unless the "detector" can be proven to work reliably.
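To make that concrete, here is a quick base-rate sketch (in Python) of why a detector's headline percentage is meaningless without a known error rate. The sensitivity, specificity, and prevalence numbers below are made-up assumptions for illustration, not figures from the article:

```python
# Toy base-rate calculation: how much content would a detector *report* as AI?
# All numbers here are illustrative assumptions, not data from the study.

def detector_report(prevalence: float, sensitivity: float, specificity: float):
    """Return (share of posts flagged as AI, precision of those flags)."""
    true_pos = prevalence * sensitivity               # AI posts correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # human posts wrongly flagged
    flagged = true_pos + false_pos
    precision = true_pos / flagged if flagged else 0.0
    return flagged, precision

# Suppose only 10% of posts are AI-written and the detector is 90% accurate both ways:
flagged, precision = detector_report(prevalence=0.10, sensitivity=0.90, specificity=0.90)
print(f"reported AI share: {flagged:.1%}, precision: {precision:.1%}")
# reported AI share: 18.0%, precision: 50.0%
```

Even a detector that is right 90% of the time nearly doubles the apparent AI share, and half of its flags are wrong, which is why a figure like 24.05% says very little on its own.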
-
[email protected] replied to [email protected]
"Chatbot" doesn't mean it held a real conversation. Some just spammed links from a list of canned responses, or just upvoted the other chatbots to get more visibility, or just reposted a comment from another user.
-
[email protected] replied to [email protected]
Yeah, the company behind the article is plugging their own AI-detection service, which I'm sure needs a couple of paragraphs to be at all accurate. For something in the range of just a sentence or two, it's usually not going to be possible to detect an LLM.
-
[email protected] replied to [email protected]
FB has been junk for more than a decade now, AI or no.
I check mine every few weeks because I'm a sports announcer and it's one way people get in contact with me, but it's clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I'm not a fan of having my day derailed.
-
[email protected] replied to [email protected]
If you could reliably detect "AI" using an "AI" you could also use an "AI" to make posts that the other "AI" couldn't detect.
-
[email protected] replied to [email protected]
The context is bad though.
The post I'm referencing has been removed, but there was a tiny "from Gemini" footnote at the bottom that most upvoters clearly missed, and the whole thing was presented like a quote from a news article and taken as fact by the OP in their own commentary.
And the larger point I'm making is that this poor soul had no idea Gemini is basically an improv actor compelled to continue whatever it writes, not a research agent. My sister, ridiculously smart and more put together, didn't know either. She just searched for factual stuff in the Gemini app and assumed it was directly searching the internet.
AI is a good thinker, analyzer, spitballer, and initial source, yes, but it's being marketed like an oracle, and that is going to screw the world up.
-
[email protected] replied to [email protected]
Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.
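A minimal sketch of that dynamic, as a toy GAN-style training loop in PyTorch; the data, network sizes, and the lambda weight are illustrative assumptions, not anything from the article. The key part is the generator's loss: gradient spent on fooling the detector is gradient not spent on the original task, and lambda sets that trade-off:

```python
# Toy generator-vs-detector arms race (GAN-style), illustrating the trade-off above.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Generator": maps noise to a 1-D sample. Its *intended* job here is to match
# a target mean of 4.0 (a stand-in for whatever the spam was meant to say).
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# "Detector": classifies a sample as human (1) or generated (0).
det = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # weight on "fool the detector"; raising it pulls the generator off its task

for step in range(2000):
    real = torch.randn(64, 1) + 4.0      # "human" samples
    fake = gen(torch.randn(64, 8))       # generated samples

    # Detector update: learn to tell real from generated.
    d_loss = (bce(det(real), torch.ones(64, 1)) +
              bce(det(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: the original task PLUS an adversarial term
    # pushing the output to look "human" to the detector.
    task_loss = (fake.mean() - 4.0) ** 2
    fool_loss = bce(det(fake), torch.ones(64, 1))
    g_loss = task_loss + lam * fool_loss
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

With lam = 0 the generator only optimizes its task; as lam grows, more of its capacity goes to evading the detector, which is exactly the "worse at its intended job" effect described above.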
-
[email protected] replied to [email protected]
I deleted Facebook in like 2010 or so, because I hardly ever used it anyway. It wasn't really bad back then, just not for me. Six or so years later a friend of mine wanted to show me something on FB but couldn't find it, so he was just scrolling, and I was blown away by how bad it was: just ads and autoplayed videos and absolute garbage. And from what I understand, it has only gotten worse and worse. Everyone I know who still uses Facebook is on it for the Marketplace.
-
[email protected] replied to [email protected]
It's such a cesspit.
I'm glad we have the Fediverse.
-
[email protected] replied to [email protected]
I did not know that. There’s a bunch of news articles going around claiming that even the creators of the models don’t understand them and that they are some sort of unfathomable magic black box. I assumed you were propagating that myth, but I was clearly mistaken.
-
[email protected] replied to [email protected]
That's an extremely small sample size for this.
-
[email protected] replied to [email protected]
I see no reason why "post right-wing propaganda" and "write so you don't sound like AI" should be conflicting goals.
The actual argument why I don't find such results credible is that the "creator" is trained to sound like a human, so the "detector" has to be trained to find text that does not sound like a human. That means both basically have to solve the same task: decide whether something sounds like a human.
To find the "AI" content, the "detector" would have to be better at deciding what sounds like a human than the "creator" is. So for the results to have any kind of accuracy, you're already banking on the "detector" company having more processing power, better training data, or more money than, say, OpenAI or Google.
But also, if the "detector" were better at the job, it could be used as a better "creator" itself. Then how would we distinguish the content it created?
-
[email protected] replied to [email protected]
If you want to visit your old friends in the dying mall, go to Feeds, then Friends. That should filter everything else out.