text generator approves drugs
-
That's the point though. When data means nothing, truth is lost. It's far more sinister than people are aware. Why do you think it is literally being shoved into every little thing?
Capitalizing on a highly marketable hype bubble because the technology is specifically designed to deceive people into thinking it's more capable than it is
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
I'm constantly mystified at the huge gap between all these "new model obliterates all benchmarks/passes the bar exam/writes PhD thesis" stories and my actual experience with said model.
-
Yeah, they're using a glorified autocorrect tool to do data analysis, something that other types of machine learning models might be able to do with decent results, but that LLMs are not built for.
Gotcha, thanks
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Our world was doomed to go into the gutter the moment we started giving any credence to the marketing bastards.
-
I was talking to some friends earlier about LLMs so I’ll just copy what I said and paste it here:
It really is like a 3D printer in a lot of ways. Marketed as a catch-all solution, when in reality there are only a few things it's actually useful for. Still useful, but not where you'd expect given what it was hyped up to be.
From what I've seen, 3D printers are best at oversaturating local markets with a bunch of useless trinkets. (Just kidding though I know they have legit medical uses but oh my god)
-
From what I've seen, 3D printers are best at oversaturating local markets with a bunch of useless trinkets. (Just kidding though I know they have legit medical uses but oh my god)
Yeah, they do create a lot of weird trinkets, but they have plenty of good uses. They can make props and pieces of an outfit, which is really, really good for cosplayers. They can also have a limited role in engineering, mostly in low-heat, low-stress environments, but still useful for a cheap, decently easy-to-replace part. And like you said, they have medical applications.
It’s not an insanely useful tool for most people in most conditions but it’s still a great tool to have in some cases.
-
I was talking to some friends earlier about LLMs so I’ll just copy what I said and paste it here:
It really is like a 3D printer in a lot of ways. Marketed as a catch-all solution, when in reality there are only a few things it's actually useful for. Still useful, but not where you'd expect given what it was hyped up to be.
yeah like guns for example /s
-
So is this a situation where it's kinda like asking chatgpt to make you drugs so it will go about any means necessary (making up studies) to complete the task?
Instead of reaching a wall and saying "I can't do that because there isn't enough data"
I hope I'm wrong but if that's the case then that is next level stupid.
No, it's supposed to help the drug approval workers, but everything it says has to be double-checked, so it ends up wasting time. I put the article into the post now.
-
I'm constantly mystified at the huge gap between all these "new model obliterates all benchmarks/passes the bar exam/writes PhD thesis" stories and my actual experience with said model.
The real truth is just that standardized testing fucking sucks and always has
-
I'm constantly mystified at the huge gap between all these "new model obliterates all benchmarks/passes the bar exam/writes PhD thesis" stories and my actual experience with said model.
Likely those new models are variants trained specifically on the exact material needed to perform those tasks, essentially passing the bar exam as if it were open book.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Someone needs to do a test: when this AI launches, they need to try to get poison approved as a medication. Like straight up a lethal dose of cyanide or something.
-
From what I've seen, 3D printers are best at oversaturating local markets with a bunch of useless trinkets. (Just kidding though I know they have legit medical uses but oh my god)
I feel that's even worse with laser cutters and the Cricut stuff: just useless trinkets wasting resources and ending up in drawers, gathering dust, or in the bin.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
I'm pretty sure that undermining confidence in drug approvals is a feature, not a bug. The same people who were screeching about mRNA vaccines being secret poison that was rushed through approval are the ones doing this now, so when (not if) it does actually lead to dangerous drugs being approved and a collapse in confidence in the FDA, they'll be the ones saying "We told you so" and getting their anti-medical way.
It's the exact same playbook Republicans use in the rest of the government: say government doesn't work, cry about government spending, and insist government regulation is crushing personal freedoms; then they actually do all of those things, and when the next administration comes around they pass on the blame and say "I told you so."
-
Someone needs to do a test: when this AI launches, they need to try to get poison approved as a medication. Like straight up a lethal dose of cyanide or something.
What happens when people realise and it immediately becomes the most popular drug on the market?
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Literal... I cannot stress this enough... Literal Idiocracy.
This is literally what happens in the film. Like the first 10 minutes.
Fuck.
-
That's the point though. When data means nothing, truth is lost. It's far more sinister than people are aware. Why do you think it is literally being shoved into every little thing?
It is already making pictorial evidence worthless, which is a scary thought no justice system has even begun considering yet, even though it is literally already happening. Criminals all over the world rejoice: they can be caught doing the act on video, and it will be worthless. Of course this applies even more to large-scale criminals like dictators. It will all be "fake news" from now on.
-
What happens when people realise and it immediately becomes the most popular drug on the market?
Win-win?
-
Likely those new models are variants trained specifically on the exact material needed to perform those tasks, essentially passing the bar exam as if it were open book.
Reminds me of a video that starts with the fact you can't convince image generating AI to draw a wine glass filled to the brim. AI is great at replicating the patterns that it has seen and been trained on, like full wine glasses, but it doesn't actually know why it works or how it works. It doesn't know the things we humans know intuitively, like "filled to the brim means more liquid than full". It knows the what but doesn't get the why.
The same could apply to testing: AI knows how to solve test pages, but it wouldn't be that exact if you tried applying it to real life.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
The problems with AI we talk of here are mostly with generative AI. Protein folding, diagnostic patterns, and weather prediction work a bit differently from image-making or text-writing services.
-
There is no generative AI. It's just progressively more complicated chatbots. The goal is to fool the human into believing it's real.
It's what Frank Herbert was warning us all about in 1965.
What was Frank on about? The Butlerian Jihad, I assume? I've read the book 8 times and don't remember why the thinking machines went rogue.