text generator approves drugs
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Hallucinating studies is, however, very on brand for LLMs as opposed to other types of machine learning.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
This reminds me of how, a hundred or so years ago, people found "miracle substances" and just put them in everything.
"Uranium piles can level or power a whole city through the power of Radiation, just imagine what good this radium will do inside your jawbone!"
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
From the "mRNA vaccines aren't tested well enough" crowd
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
I was talking to some friends earlier about LLMs, so I'll just copy what I said and paste it here:
It really is like a 3D printer in a lot of ways: marketed as a catch-all solution when in reality there are only a few things it's actually useful for. Still useful, but not where you'd expect given what it was hyped up to be.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Yeah, I can say I called it. Instead of using graph neural networks trained for such a purpose (which have some actual chance of making novel drug discoveries), these idiots went and asked ChatGPT.
-
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
About as sad as the CEO
-
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
More so than the equivalent human? I have to think about this:
https://www.youtube.com/watch?v=sUdiafneqL8
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Oh shit. That MalwareTech? https://darknetdiaries.com/episode/158/
-
That's the point though. When data means nothing truth is lost. It's far more sinister than people are aware it is. Why do you think it is literally being shoved into every little thing?
lack of critical thinking is a feature in this administration
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Yeah, AI (not LLMs) can be a very useful tool in doing research, but this is about deciding whether a drug should be approved or not.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
So is this a situation where it's kinda like asking ChatGPT to make you drugs, so it will go about it by any means necessary (making up studies) to complete the task?
Instead of reaching a wall and saying "I can't do that because there isn't enough data"?
I hope I'm wrong, but if that's the case then that is next-level stupid.
-
Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.
There is no generative AI. It's just progressively more complicated chatbots. The goal is to fool the human into believing it's real.
It's what Frank Herbert was warning us all about in 1965.
-
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
Not at all, because they are not thinking or feeling machines, merely algorithms that predict the likelihood of words following other words and spit them out.
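That "predict the likelihood of words following other words" description can be sketched with a toy bigram model. This is purely illustrative (real LLMs use neural networks over subword tokens at enormous scale, and the corpus here is made up), but the training objective is the same idea in miniature: count what follows what, then turn the counts into probabilities.

```python
from collections import Counter, defaultdict

# Made-up toy corpus, just for illustration.
corpus = "the drug was approved the drug was rejected the trial was approved".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the words seen following `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("was"))  # "approved" is twice as likely as "rejected"
```

Nothing in this objective rewards truth; it rewards plausible continuations, which is exactly why made-up studies come out looking fluent.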
-
So is this a situation where it's kinda like asking chatgpt to make you drugs so it will go about any means necessary (making up studies) to complete the task?
Instead of reaching a wall and saying "I can't do that because there isn't enough data"
I hope I'm wrong but if that's the case then that is next level stupid.
Yeah, they're using a glorified autocorrect tool to do data analysis, something that other types of machine learning models might do with decent results, but that LLMs are not built for.
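For contrast, a minimal sketch of the kind of "other machine learning" the comment means: a classifier fit to structured trial data (all numbers below are invented for illustration, and a real system would use a proper library and far richer features). The key property is that such a model can only emit a label from its training set; it has no mechanism for inventing citations.

```python
# Hypothetical features: (efficacy score, adverse-event rate); label: approved?
train = [((0.9, 0.01), 1), ((0.8, 0.02), 1), ((0.3, 0.20), 0), ((0.4, 0.15), 0)]

def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

approved = centroid([x for x, y in train if y == 1])
rejected = centroid([x for x, y in train if y == 0])

def predict(x):
    # Nearest-centroid rule: pick whichever class mean is closer.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return 1 if dist(x, approved) < dist(x, rejected) else 0

print(predict((0.85, 0.02)))  # 1: looks like the approved cluster
print(predict((0.35, 0.18)))  # 0: looks like the rejected cluster
```

A model like this can be wrong, but it fails by misclassifying, not by fabricating supporting evidence.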
-
That's the point though. When data means nothing truth is lost. It's far more sinister than people are aware it is. Why do you think it is literally being shoved into every little thing?
Capitalizing on a highly marketable hype bubble because the technology is specifically designed to deceive people into thinking it's more capable than it is
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
I'm constantly mystified at the huge gap between all these "new model obliterates all benchmarks/passes the bar exam/writes PhD thesis" stories and my actual experience with said model.
-
Yeah, they're using a glorified autocorrect tool to do data analysis, something that other, different types of machine learning models might be able to do with some decent results, but LLMs are not built for.
Gotcha, thanks