Study of 8k Posts Suggests 40+% of Facebook Posts are AI-Generated
-
[email protected] replied to [email protected] last edited by
Have you ever successfully berated a stranger into doing what you wanted them to do?
-
[email protected] replied to [email protected] last edited by
AI does give itself away over "longer" posts, and if the tool makes about an equal number of false positives and false negatives, then it should even itself out in the long run. (I'd have liked more than 9K "tests" for it to average out, but even so.) If they had the edit history for the post, which they didn't, then it's more obvious. AI will either copy-paste the whole thing in one go, or will generate a word at a time at a fairly constant rate. Humans will stop and think, go back and edit things, all of that.
I was asked to do some job interviews recently; the tech test had such an "animated playback", and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions had nothing to do with the job role at hand and were causing us to select for completely the wrong candidates, but that's more a problem with our HR being blindly in love with AI and "technical solutions to human problems".
"Absolute certainty" is impossible, but the balance of probabilities will do if you just want an estimate like the one they have here.
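The typed-versus-pasted tell described above could be sketched as a heuristic over edit-history snapshots. This is purely illustrative: `classify_edit_history` and all its thresholds are made up for the sketch, not taken from any real interview platform or detector.

```python
# Illustrative heuristic only: classify a post's edit history as
# "pasted", "machine-typed", or "human-typed" from (seconds, total_chars)
# snapshots. All thresholds here are invented for the example.

def classify_edit_history(events):
    """events: list of (seconds, total_chars) snapshots, oldest first."""
    if len(events) < 2:
        return "unknown"

    # Per-step (time delta, character delta) between snapshots.
    deltas = [(t1 - t0, n1 - n0)
              for (t0, n0), (t1, n1) in zip(events, events[1:])]

    # A huge jump in one short step looks like a copy-paste.
    if any(chars > 200 and dt < 2 for dt, chars in deltas):
        return "pasted"

    # Any shrink in length means text was deleted: human editing.
    if any(chars < 0 for _, chars in deltas):
        return "human-typed"

    # A near-constant output rate with no pauses suggests generation.
    rates = [chars / dt for dt, chars in deltas if dt > 0]
    if rates:
        mean = sum(rates) / len(rates)
        var = sum((r - mean) ** 2 for r in rates) / len(rates)
        if mean > 0 and var / (mean * mean) < 0.05:  # low relative variance
            return "machine-typed"

    return "human-typed"
```

For example, a history that jumps from 0 to 500 characters in one second classifies as "pasted", while one that shrinks at any point (the author deleted something) classifies as "human-typed".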
-
[email protected] replied to [email protected] last edited by
I have no idea whether the probabilities are balanced. They claim 5% of posts were AI-generated even before ChatGPT was released, which seems pretty off. No one was using LLMs before ChatGPT went viral, except for researchers.
-
[email protected] replied to [email protected] last edited by
No I mean they literally label the post as “Gemini said this”
-
[email protected] replied to [email protected] last edited by
You know what I meant; by "no one" I mean "all but a large majority of users."
-
[email protected] replied to [email protected] last edited by
Engagement is engagement, sustainability be damned.
-
[email protected] replied to [email protected] last edited by
Considering that they do automated analysis, 8k posts does not seem like a lot. But still very interesting.
-
[email protected] replied to [email protected] last edited by
I think you give them too much credit. As long as it doesn't actively hurt their numbers, like X, it's just part of the budget.
-
[email protected] replied to [email protected] last edited by
The title says 40% of posts, but the article says 40% of long-form posts, yet doesn't in any way specify what counts as a long-form post. My understanding is that the vast majority of Facebook posts are about the length of a tweet, so I doubt that the title is even remotely accurate.
-
[email protected] replied to [email protected] last edited by
I'm pretty sure chatbots were a thing before AI. They certainly weren't as smart, but they did exist.
-
[email protected] replied to [email protected] last edited by
Very true.
But also so stupid because their user base is, what, a good fraction of the planet? How can they grow?
-
[email protected] replied to [email protected] last edited by
This kind of just looks like an ad for that company's AI-detection software, NGL.
-
[email protected] replied to [email protected] last edited by
I see no problem if the poster discloses that the source is AI. That automatically devalues the content of the post/comment, and it should trigger the reaction that the information is to be taken with a grain of salt and needs to be fact-checked to improve the likelihood that what was written is factual.
An AI output is most of the time a good indicator of what the truth is, and it can add new talking points to a discussion. But it is of course not a "killer argument".
-
[email protected] replied to [email protected] last edited by
This whole concept relies on the idea that we can reliably detect AI, which is just not true. None of these "AI detector" apps or services actually work reliably; they have terribly low success rates. The whole point of LLMs is to be indistinguishable from human text, so if they're working as intended then you can't really "detect" them.
So all of these claims, especially the precision to which they are written (24.05%, etc.), are almost meaningless unless the "detector" can be proven to work reliably.
-
[email protected] replied to [email protected] last edited by
Being a chatbot doesn't mean holding a real conversation. Some just spammed links from a list of canned responses, or upvoted the other chatbots to get more visibility, or just reposted a comment from another user.
-
[email protected] replied to [email protected] last edited by
Yeah, the company that wrote the article is plugging their own AI-detection service, which I'm sure needs a couple of paragraphs to be at all accurate. For something in the range of just a sentence or two, it's usually not going to be possible to detect an LLM.
-
[email protected] replied to [email protected] last edited by
FB has been junk for more than a decade now, AI or no.
I check mine every few weeks because I'm a sports announcer and it's one way people get in contact with me, but it's clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I'm not a fan of having my day derailed.
-
[email protected] replied to [email protected] last edited by
If you could reliably detect "AI" using an "AI" you could also use an "AI" to make posts that the other "AI" couldn't detect.
-
[email protected] replied to [email protected] last edited by
The context is bad though.
The post I'm referencing has been removed, but there was a tiny "from Gemini" footnote at the bottom that most upvoters clearly missed, and the whole thing was presented like a quote from a news article and taken as fact by OP in their own commentary.
And the larger point I'm making is that this poor soul had no idea Gemini is basically an improv actor compelled to continue whatever it writes, not a research agent. My sister, ridiculously smart and more put together, didn't either. She just searched for factual stuff from the Gemini app and assumed it was directly searching the internet.
AI is good as a thinker, analyzer, spitballer, and initial source, yes, but it's being marketed like an oracle, and that is going to screw the world up.
-
[email protected] replied to [email protected] last edited by
Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.