agnos.is Forums

the beautiful code

Programmer Humor · programmerhumor
226 Posts, 135 Posters
  • Z [email protected]

    You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.

    W This user is from outside of this forum
    W This user is from outside of this forum
    [email protected]
    wrote on last edited by
    #221

I didn't say they have no knowledge, quite the opposite. Here's a quote from the comment you answered:

    LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

There is a subtle difference between intelligent and knowledgeable. LLMs know a lot, in the sense that they can remember a lot of things, but they are dumb in the sense that they are completely unable to draw conclusions and put that knowledge into action by any means other than spitting out again what they once learned.

That's why LLMs can tell you a lot about all the different kinds of game theory around tic-tac-toe, but can't draw or win that game consistently.

So knowing a lot and still being dumb is not a contradiction.
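The tic-tac-toe contrast can be made concrete: perfect play is a solved problem that a few dozen lines of classical minimax handle deterministically, which is exactly the "putting knowledge into action" an LLM struggles with. A minimal illustrative sketch (not from the thread):

```python
def winner(b):
    """Return 'X' or 'O' if a line is complete, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Exhaustive minimax: X maximizes, O minimizes. Returns (score, move)."""
    w = winner(b)
    if w is not None:
        return (1 if w == "X" else -1), None
    if all(c is not None for c in b):
        return 0, None  # draw
    best_score, best_move = None, None
    for m in (i for i, c in enumerate(b) if c is None):
        b[m] = player
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = None
        if (best_score is None
                or (player == "X" and score > best_score)
                or (player == "O" and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax([None] * 9, "X")
print(score)  # 0: perfect play from an empty board is always a draw
```

A tiny deterministic search never loses; an LLM that can recite the game theory still plays inconsistently.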

    • C [email protected]

      Is there a chance that's right around the time the code no longer fits into the LLMs input window of tokens? The basic technology doesn't actually have a long term memory of any kind (at least outside of the training phase).

      maggiwuerze@feddit.orgM This user is from outside of this forum
      maggiwuerze@feddit.orgM This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #222

That was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.
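The VRAM point can be made concrete with back-of-the-envelope arithmetic: a transformer's KV cache grows linearly with context length, so longer contexts directly balloon memory. The dimensions below are illustrative assumptions, not any particular model's:

```python
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=8,
                   head_dim=128, bytes_per_value=2):
    """Rough KV-cache size: keys + values for every token, every layer.

    All dimensions are made-up but plausible; bytes_per_value=2
    assumes fp16/bf16 storage.
    """
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
    return context_len * per_token

for ctx in (8_192, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.0f} GiB")
# 8192 tokens -> 1 GiB; 131072 tokens -> 16 GiB with these assumptions
```

Linear growth means 16× the context needs 16× the cache on top of the weights, which is why naïvely extending context windows runs into VRAM limits.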

#223 · [email protected], replying to [email protected]:

    That was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.

The thing is, it's kind of an inflexible black-box technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common-sense stuff that people were chasing for decades inside a computer, but it's ironically still beyond our reach to fully use.

From here, I either expect steady progress in finding more clever and constrained ways of using the raw neural-net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.

        • C [email protected]

          The thing being, it's kind of an inflexible blackbox technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common sense stuff that people were chasing for decades inside a computer, but it's ironically still beyond our reach to fully use.

          From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.

          maggiwuerze@feddit.orgM This user is from outside of this forum
          maggiwuerze@feddit.orgM This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #224

I feel like the way investments are currently made makes coming up with something new almost impossible. Most of the hardware is designed with LLMs in mind.

          • C [email protected]

            I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.

            J This user is from outside of this forum
            J This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #225

            That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

            • J [email protected]

              That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

              C This user is from outside of this forum
              C This user is from outside of this forum
              [email protected]
              wrote on last edited by [email protected]
              #226

You can devise a task it couldn't have seen in the training data, I mean. Building a comprehensive argument out of them requires a lot more work and time.

    You don't even have access to the "thinking" side of the LLM.

Obviously, that goes for natural intelligences too, so it's not really a fair thing to require.
