text generator approves drugs
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Obviously that should be in an advisory capacity, and not making decisions (like approving drugs for human use [which I heavily doubt was actually happening]).
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
You are correct. However, more often than not it's just like the image describes, and people are actually applying LLMs en masse to random problems.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
What AI, apart from language generators, "makes up studies"?
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
That's the point though. When data means nothing, truth is lost. It's far more sinister than people are aware. Why do you think it is literally being shoved into every little thing?
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Right. You're talking about specialized AI that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.
LLMs are generalized AI which can't do any of those things. The problem is that what they're good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialized AI does.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Hallucinating studies is, however, very on brand for LLMs, as opposed to other types of machine learning.
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
This reminds me of how, like a hundred or so years ago, people found "miracle substances" and just put them in everything.
"Uranium piles can level or power a whole city through the power of Radiation, just imagine what good this radium will do inside your jawbone!"
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
From the "mRNA vaccines aren't tested well enough" crowd.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
I was talking to some friends earlier about LLMs, so I'll just copy what I said and paste it here:
It really is like a 3D printer in a lot of ways. Marketed as a catch-all solution, when in reality there are only a few things it's actually useful for. Still useful, but not where you'd expect given what it was hyped up to be.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Yeah, I can say I called it. Instead of using graph neural networks trained for such a purpose (which have some actual chance of making novel drug discoveries), these idiots went and asked ChatGPT.
-
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
About as sad as the CEO
-
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
More so than the equivalent human? I have to think about this:
https://www.youtube.com/watch?v=sUdiafneqL8
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Oh shit. That MalwareTech? https://darknetdiaries.com/episode/158/
-
That's the point though. When data means nothing, truth is lost. It's far more sinister than people are aware. Why do you think it is literally being shoved into every little thing?
lack of critical thinking is a feature in this administration
-
Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.
Am I wrong?
Yeah, AI (not LLMs) can be a very useful tool in doing research, but this is about deciding if a drug should be approved or not.
-
https://infosec.exchange/@malwaretech/114903901544041519
The article, since there is so much confusion about what we are actually talking about:
https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
So is this a situation where it's kinda like asking ChatGPT to make you drugs, so it will go about it by any means necessary (making up studies) to complete the task?
Instead of reaching a wall and saying "I can't do that because there isn't enough data."
I hope I'm wrong, but if that's the case then that is next-level stupid.
-
Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.
There is no generative AI. It's just progressively more complicated chatbots. The goal is to fool the human into believing it's real.
It's what Frank Herbert was warning us all about in 1965.