agnos.is Forums

AI cracks superbug problem in two days that took scientists years

Technology · 16 Posts · 13 Posters
#1 · [email protected]

This post did not contain any content.
    • C [email protected]
      This post did not contain any content.
      a_a@lemmy.worldA This user is from outside of this forum
      a_a@lemmy.worldA This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #2

      it's not word completion, its so far from it :

      (...) He told the BBC of his shock when he found what it had done, given his research was not published so could not have been found by the AI system in the public domain. (...)

      (...) "It's not just that the top hypothesis they provide was the right one," he said.
      "It's that they provide another four, and all of them made sense.
      "And for one of them, we never thought about it, and we're now working on that." (...)

      D 1 Reply Last reply
      0
      • C [email protected]
        This post did not contain any content.
        snotflickerman@lemmy.blahaj.zoneS This user is from outside of this forum
        snotflickerman@lemmy.blahaj.zoneS This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #3

        Google doesn't need access to all his unpublished research if he's ever mentioned anything about it online or in an email that went to a gmail address.

        Further, University of Cambridge runs on Microsoft Exchange and University of Glasgow uses Office365.

        Not to put to fine a point on it, but they don't need access to your computer and this feels a little bit overhyped.

        Also just because it came to the same conclusion means about as much as it coming to the wrong conclusion, does it not? Since there is no actual "thinking" in these devices? How do we know the "right" conclusion wasn't merely a hallucination?

        flisty@mstdn.socialF 1 Reply Last reply
        0
        • C [email protected]
          This post did not contain any content.
          R This user is from outside of this forum
          R This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #4

          It's so easy to ask a question in such a way that the statistically most likely answer is the one at the front of your mind.

          1 Reply Last reply
          0
          • C [email protected]
            This post did not contain any content.
            M This user is from outside of this forum
            M This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #5

            Great! We have a tested solution and scalled up th3 drug to treat the issue. And in 2 days! Great!

            Oh, that is not what we have?

            1 Reply Last reply
            0
            • C [email protected]
              This post did not contain any content.
              dojan@lemmy.worldD This user is from outside of this forum
              dojan@lemmy.worldD This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #6

              "I wrote an email to Google to say, 'you have access to my computer, is that right?'", he added.

              lmao right, because the support person they reached, if indeed they even spoke to a person at all, would know and divulge the sources they train on. They may think that all their research is private but they're making use of these tech giant services. These tech giants have blatantly showed that they're OK with piracy and copyright infringement to further their goals, why would spying on research institutions be any different?

              If you want to give it a run for its money, give it a novel problem that isn't solved, and see what it comes up with.

              D a_a@lemmy.worldA trenchcoatfullofbats@belfry.ripT 3 Replies Last reply
              0
#7 · [email protected], replying to #2

Assuming OpenAI etc. only use data from the public domain is stupid (and contrary to most news sources on the matter). He has literally no idea what the AI has trained on (not even the developers know, because there's just too much of it to be reviewed by humans). They've undoubtedly bought countless amounts of data that isn't readily searchable by public engines.

He sounds very ill-informed on the matter of data collection and probably just had his info/data on a cloud service somewhere whose text was part of the trillions of terabytes LLMs have accessed and trained on.
#8 · [email protected], replying to #3

@SnotFlickerman @cm0002 Unless he's done the research himself, he won't know whether the results are viable; as he says, they've got to test the "new" one. So at best it gives you a bit of a head start on new avenues; at worst it completely wastes your time down a new rabbit hole.
                  • D [email protected]

                    Assuming Open AI ect only use data from the public domain is stupid (and contrary to most news sources on the matter). He has literally no idea what the AI has trained on (not even developers know, because there's just too much of it to be reviewed by humans). They've undoubtedly bought countless amounts of data that isn't readily searchable by public engines.

                    He sounds very ill informed on the matter of data collection and probably just had his info/data on a cloud service somewhere whose text was part of the trillions of terrabytes LLM have accessed and trained on.

                    a_a@lemmy.worldA This user is from outside of this forum
                    a_a@lemmy.worldA This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #9

                    it seems you did not read my comment in entirety.

                    1 Reply Last reply
                    0
#10 · [email protected], replying to #6

Large language model companies weren't even aware that their data (which is so large they themselves have no idea what's in it) contained other languages, so the models suddenly knew how to speak those languages. The above story feels like those "Large language models are super intelligent! They've taught themselves French!" stories. No: mass surveillance and corporations being above the law taught them everything they know.
                      • C [email protected]
                        This post did not contain any content.
                        R This user is from outside of this forum
                        R This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #11

                        if this is machine learning and neural networks, I can believe it's a good thing, maybe even meaningful for the potential of so called artificial intelligence.

                        if this is an LLM that's alleged to have popped this "virus tail" theory out of... what exactly...? I'm not buying it.

                        1 Reply Last reply
                        0
                        • C [email protected]
                          This post did not contain any content.
                          F This user is from outside of this forum
                          F This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #12

                          Uh no, the AI didn't crack any problem.

                          The AI produced the same hypothesis that a scientist produced, one that the scientist considered his own original awesome idea.

                          But the truth is that science is less about producing awesome ideas and more about proving them. And AI did nothing in this regard, except to remind scientists that their original awesome ideas are often not so original.

                          There's even a term scientists use when another scientist has the same idea but actually managed to do the work of proving it: "scooped". It's a very common occurrence. It didn't happen here.

                          1 Reply Last reply
                          0
#13 · [email protected], replying to #6

(...) If you want to give it a run for its money, give it a novel problem that isn't solved, and see what it comes up with.

You mean like researchers have done here?

https://bturtel.substack.com/p/human-all-too-human
"For AI to learn something fundamentally new - something it cannot be taught by humans - it requires exploration and ground-truth feedback."

https://www.lightningrod.ai/
"We're enabling self-play that learns directly from real world feedback."
                            • C [email protected]
                              This post did not contain any content.
                              J This user is from outside of this forum
                              J This user is from outside of this forum
                              [email protected]
                              wrote on last edited by
                              #14

                              When AI decides to destroy the human virus, it now knows exactly how to create a bug capable of it. Probably more likely than pumping out a bunch of humanoid robots with guns, just create a bug, spread it around, and mess with our ability to communicate in time to stop the spread. BAM. Easy-peasy, humans are now down to a manageable 1 billion or so individuals.

                              1 Reply Last reply
                              0
#15 · [email protected], replying to #6

"I wrote an email to ~~Google~~ Gryzzl to say, 'you have access to my computer, is that right?'", he added.

Later that day...
                                • C [email protected]
                                  This post did not contain any content.
                                  H This user is from outside of this forum
                                  H This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #16

                                  Now if you'd all just empty your wallets into the AI bonfire. Thaaaat's right.

                                  1 Reply Last reply
                                  0