agnos.is Forums

text generator approves drugs

Microblog Memes · 50 Posts · 41 Posters
  • S [email protected]

    Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

    It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

    Am I wrong?

  • [email protected] (#4), replying to [email protected]:

    You are correct. However, more often than not it's just like the image describes, and people are applying LLMs en masse to random problems.
    • S [email protected]

      Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

      It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

      Am I wrong?

      N This user is from outside of this forum
      N This user is from outside of this forum
      [email protected]
      wrote last edited by
      #5

      what ai, apart from language generators "makes up studies"

      1 Reply Last reply
      7
      • N [email protected]

        https://infosec.exchange/@malwaretech/114903901544041519

        the article since there is so much confusion what we are actually talking about
        https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

        T This user is from outside of this forum
        T This user is from outside of this forum
        [email protected]
        wrote last edited by
        #6

        That's the point though. When data means nothing truth is lost. It's far more sinister than people are aware it is. Why do you think it is literally being shoved into every little thing?

        match@pawb.socialM kogasa@programming.devK B 3 Replies Last reply
        14
        • S [email protected]

          Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

          It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

          Am I wrong?

          M This user is from outside of this forum
          M This user is from outside of this forum
          [email protected]
          wrote last edited by
          #7

          Right. You're talking about specialized AI that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.

          Llama are generalized AI which can't do any of those things. The problem is that what it's good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialize AI does.

          1 Reply Last reply
          1
          • S [email protected]

            Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

            It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

            Am I wrong?

            B This user is from outside of this forum
            B This user is from outside of this forum
            [email protected]
            wrote last edited by
            #8

            That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.

            1 Reply Last reply
            1
            • S [email protected]

              Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

              It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

              Am I wrong?

              J This user is from outside of this forum
              J This user is from outside of this forum
              [email protected]
              wrote last edited by
              #9

              Hallucinating studies is however very on brand for LLM as opposed to other types of machine learning.

              1 Reply Last reply
              3
              • S [email protected]

                Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

                It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

                Am I wrong?

                jaredwhite@piefed.socialJ This user is from outside of this forum
                jaredwhite@piefed.socialJ This user is from outside of this forum
                [email protected]
                wrote last edited by
                #10

                Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.

                P 1 Reply Last reply
                0
                • N [email protected]

                  https://infosec.exchange/@malwaretech/114903901544041519

                  the article since there is so much confusion what we are actually talking about
                  https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                  archmageazor@lemmy.worldA This user is from outside of this forum
                  archmageazor@lemmy.worldA This user is from outside of this forum
                  [email protected]
                  wrote last edited by
                  #11

                  This reminds me of how like a hundred or so years ago people found "miracle substances" and just put them in everything.

                  "Uranium piles can level or power a whole city through the power of Radiation, just imagine what good this radium will do inside your jawbone!"

                  1 Reply Last reply
                  5
                  • N [email protected]

                    https://infosec.exchange/@malwaretech/114903901544041519

                    the article since there is so much confusion what we are actually talking about
                    https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                    K This user is from outside of this forum
                    K This user is from outside of this forum
                    [email protected]
                    wrote last edited by
                    #12

                    From the mrna vaccines aren't tested well enough crowd

                    1 Reply Last reply
                    1
                    • N [email protected]

                      https://infosec.exchange/@malwaretech/114903901544041519

                      the article since there is so much confusion what we are actually talking about
                      https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                      mushuchupacabra@lemmy.worldM This user is from outside of this forum
                      mushuchupacabra@lemmy.worldM This user is from outside of this forum
                      [email protected]
                      wrote last edited by
                      #13

                      Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?

                      A E B 3 Replies Last reply
                      0
                      • N [email protected]

                        https://infosec.exchange/@malwaretech/114903901544041519

                        the article since there is so much confusion what we are actually talking about
                        https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                        oxysis@lemmy.blahaj.zoneO This user is from outside of this forum
                        oxysis@lemmy.blahaj.zoneO This user is from outside of this forum
                        [email protected]
                        wrote last edited by
                        #14

                        I was talking to some friends earlier about LLMs so I’ll just copy what I said and paste it here:

                        It really is like a 3d printer in a lot of ways. Marketed as a catch all solution and in reality it has a few things where it’s actually useful for. Still useful but not where you’d expect it to be given what it was hyped up to be.

                        G N 2 Replies Last reply
                        1
                        • N [email protected]

                          https://infosec.exchange/@malwaretech/114903901544041519

                          the article since there is so much confusion what we are actually talking about
                          https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                          iavicenna@lemmy.worldI This user is from outside of this forum
                          iavicenna@lemmy.worldI This user is from outside of this forum
                          [email protected]
                          wrote last edited by [email protected]
                          #15

                          Yea I can say I called it. Instead of using graph neural networks trained for such a purpose (which have some actual chance of making novel drug discoveries), these idiots went on and asked chatgpt.

                          1 Reply Last reply
                          2
  • [email protected] (#16), replying to [email protected]:

    About as sad as the CEO.
  • [email protected] (#17), replying to [email protected]:

    More so than the equivalent human? I have to think about this:
    https://www.youtube.com/watch?v=sUdiafneqL8
                              • N [email protected]

                                https://infosec.exchange/@malwaretech/114903901544041519

                                the article since there is so much confusion what we are actually talking about
                                https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                                K This user is from outside of this forum
                                K This user is from outside of this forum
                                [email protected]
                                wrote last edited by
                                #18

                                Oh shit. That MalwareTech? https://darknetdiaries.com/episode/158/

                                1 Reply Last reply
                                1
                                • T [email protected]

                                  That's the point though. When data means nothing truth is lost. It's far more sinister than people are aware it is. Why do you think it is literally being shoved into every little thing?

                                  match@pawb.socialM This user is from outside of this forum
                                  match@pawb.socialM This user is from outside of this forum
                                  [email protected]
                                  wrote last edited by
                                  #19

                                  lack of critical thinking is a feature in this administration

                                  M 1 Reply Last reply
                                  3
                                  • S [email protected]

                                    Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there's good use cases in protein folding, recognising diagnostic patterns in medical images.

                                    It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

                                    Am I wrong?

                                    T This user is from outside of this forum
                                    T This user is from outside of this forum
                                    [email protected]
                                    wrote last edited by
                                    #20

                                    Yeah, AI (not LLM) can be a very useful tool in doing research, but this takes about deciding if a drug should be approved or not.

                                    1 Reply Last reply
                                    0
                                    • N [email protected]

                                      https://infosec.exchange/@malwaretech/114903901544041519

                                      the article since there is so much confusion what we are actually talking about
                                      https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary

                                      B This user is from outside of this forum
                                      B This user is from outside of this forum
                                      [email protected]
                                      wrote last edited by
                                      #21

                                      So is this a situation where it's kinda like asking chatgpt to make you drugs so it will go about any means necessary (making up studies) to complete the task?

                                      Instead of reaching a wall and saying "I can't do that because there isn't enough data"
                                      I hope I'm wrong but if that's the case then that is next level stupid.

                                      B N 2 Replies Last reply
                                      0
  • [email protected] (#22), replying to [email protected]:

    There is no generative AI. It's just progressively more complicated chatbots. The goal is to fool the human into believing it's real.

    It's what Frank Herbert was warning us all about in 1965.
  • [email protected] (#23), replying to [email protected]:

    Not at all, because they are not thinking or feeling machines, merely algorithms that predict the likelihood of words following other words and spit them out.
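    That "predict the likelihood of words following other words" description can be sketched with a toy bigram model. This is a drastic simplification (real LLMs use neural networks over token sequences, not word-frequency tables, and the corpus below is made up for illustration), but it shows the basic predict-and-emit loop:

    ```python
    from collections import defaultdict, Counter
    import random

    def train_bigram(corpus: str) -> dict:
        """Count how often each word follows each other word."""
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def next_word(counts: dict, prev: str) -> str:
        """Sample the next word in proportion to how often it followed prev."""
        options = counts[prev]
        return random.choices(list(options), weights=list(options.values()))[0]

    # Tiny toy corpus; every "prediction" is just a frequency lookup.
    model = train_bigram("the cat sat on the mat and the cat ran")
    print(next_word(model, "cat"))  # "sat" or "ran", weighted by observed frequency
    ```

    Note that nothing in this loop checks whether the output is true; the model only emits statistically plausible continuations, which is why made-up citations are a natural failure mode rather than a rare bug.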