
agnos.is Forums


AGI achieved 🤖

Lemmy Shitpost · 142 Posts, 69 Posters
  • L [email protected]

    interesting

    [email protected]
    #18

    I'm not involved in LLMs, but apparently the way it works is that the sentence is broken into words and each word is assigned a unique number, and that's how the information is stored. So the LLM never sees the actual word.
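    As a rough sketch of that idea (the vocabulary and IDs below are made up for illustration, not taken from any real tokenizer):

    ```python
    # Toy illustration: a language model works on token IDs, not letters.
    # This tiny vocabulary is invented for the example; real tokenizers use
    # tens of thousands of subword tokens rather than whole words.
    vocab = {"how": 101, "many": 102, "r": 103, "in": 104, "strawberry": 105}

    def tokenize(sentence: str) -> list[int]:
        """Map each word to its ID; unknown words fall back to placeholder ID 0."""
        return [vocab.get(word, 0) for word in sentence.lower().split()]

    print(tokenize("How many r in strawberry"))
    # [101, 102, 103, 104, 105] -> the model only ever sees these numbers,
    # so the letters inside "strawberry" are not directly visible to it.
    ```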

  • [email protected]

      The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end

    [email protected]
      #19

    Ah, a fellow Stanley Parable enjoyer, love to see it!

      *the end is never the end is never the end

  • [email protected]

    I'm not involved in LLMs, but apparently the way it works is that the sentence is broken into words and each word is assigned a unique number, and that's how the information is stored. So the LLM never sees the actual word.

    [email protected]
        #20

    Adding to this, each word and the words around it are given a statistical probability. In other words, what are the odds that word 1 and word 2 follow each other? You scale that out for each word in a sentence and you can see that LLMs are just huge math equations that put words together based on their statistical probability.

    This is key because, I can't emphasize this enough, AI does not think. We (humans) anthropomorphize them, giving them human characteristics when they are little more than number crunchers.
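    A minimal sketch of that "statistical probability" idea, using a toy bigram count over a made-up corpus (real LLMs use transformers over vastly more data, but the predict-the-next-token framing is the same):

    ```python
    from collections import Counter, defaultdict

    # Toy bigram model: estimate P(next word | current word) from raw counts.
    # The corpus is invented for illustration only.
    corpus = "the end is never the end is never the end".split()

    counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1

    def next_word_probs(word: str) -> dict[str, float]:
        """Return the empirical probability of each word following `word`."""
        total = sum(counts[word].values())
        return {nxt: c / total for nxt, c in counts[word].items()}

    print(next_word_probs("the"))  # {'end': 1.0}
    print(next_word_probs("is"))   # {'never': 1.0}
    ```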

  • [email protected]

          Agi lost

    [email protected]
          #21

          Henceforth, AGI should be called "almost general intelligence"

  • [email protected]

            Biggest threat to humanity

    [email protected]
            #22

            I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy

  • [email protected]

              Henceforth, AGI should be called "almost general intelligence"

    [email protected]
              #23

              Happy cake day 🍰

  • [email protected]
                This post did not contain any content.
    [email protected]
                #24

                I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.
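    For contrast, letter counting is trivial when you actually operate on characters, which is exactly what the model never gets to do (the token split shown in the comment is hypothetical):

    ```python
    # Counting letters is easy when you have the characters themselves.
    word = "strawberry"
    print(word.count("r"))  # 3

    # A subword tokenizer, however, might split the word into pieces such as
    # ["str", "aw", "berry"] (hypothetical split) and replace each piece with an
    # integer ID, so the character-level view is gone before the model ever runs.
    ```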

  • [email protected]

                  Happy cake day 🍰

    [email protected]
                  #25

                  Thanks! Time really flies.

  • [email protected]
                    This post did not contain any content.
    [email protected]
                    #26

                    It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

  • [email protected]

                      I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy

    [email protected]
                      #27

    And if it messed up on the other word, we could say it's because it's pronounced Louisianer.

  • [email protected]

    Next step: how many r's in Lollapalooza?

    [email protected]
                        #28

  • [email protected]

    [email protected]
                          #29

    Try it with o3, maybe it needs time to think 😝

  • [email protected]

                            I don't get it

    [email protected]
                            #30

                            https://www.youtube.com/shorts/7pQrMAekdn4

  • [email protected]

                              I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.

    [email protected]
                              #31

    True, and I agree with you, yet we are being told all jobs are going to disappear, AGI is coming tomorrow, etc. As usual, the truth is more balanced.

  • [email protected]

                                I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.

    [email protected]
                                #32

    I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (i.e. each letter will be its own token), it still gets the count wrong.

  • [email protected]

    Next step: how many r's in Lollapalooza?

    [email protected]
                                  #33

    Apparently, this robot is Japanese.

  • [email protected]

                                    It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

    [email protected]
                                    #34

    Fair point, but a big part of "intelligence" tasks is memorization.

  • [email protected]

                                      It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

    [email protected]
                                      #35

    It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.

  • [email protected]

                                        Fair point, but a big part of "intelligence" tasks are memorization.

    [email protected]
                                        #36

    Computers, for all intents and purposes, have perfect recall, so if memorization were all it took, a model trained on a large data set would have much better intelligence. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what "AI" has shown to be pretty shit at.

  • [email protected]

                                          It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

    [email protected]
                                          #37

    There are different types of artificial intelligence. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.
