agnos.is Forums

AGI achieved 🤖

lemmyshitpost
142 Posts 69 Posters 0 Views
  • K [email protected]

    I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (ie each letter will be its own token), it still gets the count wrong.

    [email protected]
    #68

    In my experience, when using reasoning models, it can count, but not very consistently. I've tried random assortments of letters and it can count them correctly sometimes. It seems to have much harder time when the same letter repeats many times, perhaps because those are tokenized irregularly.
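The tokenization point can be sketched with a toy subword splitter. The vocabulary below is invented for illustration; real models learn theirs (via BPE or WordPiece) from data. The idea is that the letter count is invisible at the token level but trivial at the character level:

```python
# Toy stand-in for a subword tokenizer. The vocabulary is invented for
# illustration; real tokenizers learn merges from a training corpus.
VOCAB = ["straw", "berry", "rasp", "blue"]

def toy_tokenize(word: str) -> list[str]:
    """Greedily split a word into known subword pieces."""
    tokens, rest = [], word
    while rest:
        for piece in VOCAB:
            if rest.startswith(piece):
                tokens.append(piece)
                rest = rest[len(piece):]
                break
        else:
            # Fall back to single characters for unknown spans.
            tokens.append(rest[0])
            rest = rest[1:]
    return tokens

word = "strawberry"
print(toy_tokenize(word))  # ['straw', 'berry'] -- the model "sees" two opaque tokens
print(word.count("r"))     # 3 -- trivial once you work at the character level
```

A model that only ever sees `['straw', 'berry']` has no direct access to the three r's, which is one plausible reason repeated letters trip it up.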

    • eager_eagle@lemmy.worldE [email protected]

      are you sure?

      [email protected]
      #69

      Sorry, that was Claude 3.7, not ChatGPT 4o

      https://github.com/elder-plinius/CL4R1T4S/blob/d9a004b5a29395675c5a548acfc386459f71cd14/ANTHROPIC/Claude_Sonnet_3.7_New.txt#L92

      • cyrano@lemmy.dbzer0.comC [email protected]
        This post did not contain any content.
        [email protected]
        #70

        Singularity is here

        • S [email protected]

          You might just love Blindsight. Here, they're trying to decide whether an alien life form is sentient or a Chinese Room:

          "Tell me more about your cousins," Rorschach sent.

          "Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."

          "We'd like to know about this tree."

          Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."

          "Well, it asked for clarification," Bates pointed out.

          "It asked a follow-up question. Different thing entirely."

          Bates was still out of the loop. Szpindel was starting to get it, though...

          [email protected]
          #71

          Blindsight is such a great novel. It has not one, not two, but three great sci-fi concepts rolled into one book.

          One is artificial intelligence (the ship's captain is an AI); the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the most wild: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.

          • T [email protected]

            Sorry, that was Claude 3.7, not ChatGPT 4o

            https://github.com/elder-plinius/CL4R1T4S/blob/d9a004b5a29395675c5a548acfc386459f71cd14/ANTHROPIC/Claude_Sonnet_3.7_New.txt#L92

            [email protected]
            #72

            Ah, that's reasonable though. Considering LLMs don't really "see" characters, it's kind of impressive this works sometimes.

            • S [email protected]

              No. Artificial Intelligence has to be imitating intelligent behavior - such as the ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and how CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.

              Which means LLMs are very much AI. They are not, however, AGI.

              [email protected]
              #73

              No, the logic for a Pac-Man ghost is a finite state machine.

              Attributing intelligence to something that probably has none is a shameful hill to die on.

              Your god is just an autocomplete bot that you refuse to learn about outside the hype bubble

              • R [email protected]

                https://en.wikipedia.org/wiki/Shibboleth

                I’ve heard “squirrel” was used to trap Germans.

                [email protected]
                #74

                I wonder if any of the Axis even bothered to have such a system to check for Americans.

                "Bawn-jehr-no"

                • cyrano@lemmy.dbzer0.comC [email protected]
                  This post did not contain any content.
                  [email protected]
                  #75

                  AI is amazing, we're so fucked.

                  /s

                  • cyrano@lemmy.dbzer0.comC [email protected]
                    This post did not contain any content.
                    [email protected]
                    #76

                    I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!

                    (Total Rs is 8. But the LOGIC ChatGPT pulls out is... remarkable!)

                    • G [email protected]

                      They don't. They can save information on drives, but searching is expensive and fuzzy search is a mystery.

                      Just because you can save an MP3 without losing data does not mean you can save the entire Internet in 400 GB and search it in an instant.

                      [email protected]
                      #77

                      Which is why it doesn’t search in an instant, uses a bunch of energy, and has to rely on evaporative cooling to keep the servers from overheating.

                      • S [email protected]

                        I wonder if any of the Axis even bothered to have such a system to check for Americans.

                        "Bawn-jehr-no"

                        [email protected]
                        #78

                        I speak Italian first-best.

                        • S [email protected]

                          Obligatory 'lore dump' on the word lollapalooza:

                          That word was a common slang term in the 1930s/40s American lingo that meant... essentially a very raucous, lively party.

                          ::: spoiler Note/Rant on the meaning of this term

                          The current Merriam-Webster and Dictionary.com definitions of this term, meaning 'an outstanding or exceptional or extreme thing', are wrong; they are too broad.

                          While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.

                          When it did not appear as a noun for a lively, possibly 'star-studded' or extravagant party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated, similar to 'that was a real humdinger of a blahblah' or 'that blahblah was a real doozy', which ties into the after-effects of having been through the 'raucous party' meaning of lollapalooza.

                          :::

                          So... in WW2, in the Pacific theatre... many US Marines were often engaged in brutal, jungle combat, often at night, and they adopted a system of basically verbal identification challenge checks if they noticed someone creeping up on their foxholes at night.

                          An example of this system used in the European theatre, I believe by the 101st and 82nd airborne, was the challenge 'Thunder!' to which the correct response was 'Flash!'.

                          In the Pacific theatre... the Marines adopted a challenge / response system... where the correct response was 'Lollapalooza'...

                          Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l', and they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they have also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese had in the 1940s.

                          ::: spoiler racist and nsfw historical example of / evidence for this

                          https://www.ep.tc/howtospotajap/howto06.html

                          :::

                          Now, some people will say this is a total myth, others will say it is not.

                          My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine... but the other stories I've heard that say it did happen all say it happened with the Marines.

                          My Grandpa is also another source for what 'lollapalooza' actually means.

                          [email protected]
                          #79

                          I'm still puzzled by what a mess this war must have been: at times someone could be close enough for a shibboleth check yet still not be clearly identifiable, while at any moment either of you could be shot dead.

                          Also, the current Russia-vs-Ukraine conflict seems to have produced the Ukrainian 'паляница' as a check, but as I have no connection to actual Ukrainians or their UAF, I can't say whether that's entirely localized to the internet.

                          • jballs@sh.itjust.worksJ [email protected]

                            I'm going to hell for laughing at that

                            [email protected]
                            #80

                            Don't be. Although there are millions of corpses behind each WW2 joke, getting it means you are personally aware of that, and that means something. 'Those who don't know shit about past struggles are doomed to reiterate them' and all that.

                            • cyrano@lemmy.dbzer0.comC [email protected]
                              This post did not contain any content.
                              [email protected]
                              #81

                              We gotta raise the bar, so they keep struggling to make it “better”

                              ::: spoiler My attempt

                              0000000000000000
                              0000011111000000
                              0000111111111000
                              0000111111100000
                              0001111111111000
                              0001111111111100
                              0001111111111000
                              0000011111110000
                              0000111111000000
                              0001111111100000
                              0001111111100000
                              0001111111100000
                              0001111111100000
                              0000111111000000
                              0000011110000000
                              0000011110000000
                              

                              Btw, I refuse to give my money to AI bros, so I don’t have the “latest and greatest”

                              :::

                              • R [email protected]

                                https://en.wikipedia.org/wiki/Shibboleth

                                I’ve heard “squirrel” was used to trap Germans.

                                [email protected]
                                #82

                                If you've ever heard Germans try to pronounce "squirrel", it's hilarious. I've known many otherwise fully bilingual Germans who couldn't pronounce it at all. It came out sounding roughly like "squall", or they'd over-pronounce the "r" and it would be "squi-rall".

                                • R [email protected]

                                  It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (the "intelligence" part of AI, for starters).

                                  [email protected]
                                  #83

                                  then continue to shill it for use cases it wasn't made for either

                                  The only thing it was made for is "spicy autocomplete".

                                  • merc@sh.itjust.worksM [email protected]

                                    If you've ever heard Germans try to pronounce "squirrel", it's hilarious. I've known many extremely bilingual Germans who couldn't pronounce it at all. It came out sounding roughly like "squall", or they'd over-pronounce the "r" and it would be "squi-rall"

                                    [email protected]
                                    #84

                                    Sqverrrrl.

                                    • underpantsweevil@lemmy.worldU [email protected]

                                      LLM wasn’t made for this

                                      There's a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders "Does my conversation partner really understand what I'm saying or am I just getting elaborate stock answers from a big library of pre-defined replies?"

                                      The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.

                                      When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.

                                      Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. Also, incidentally, why Deepseek was running laps around OpenAI and Gemini as of last year.

                                      Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books all given equal weight? Imagine hearing the plot of the Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.

                                      That's modern LLMs in a nutshell.

                                      [email protected]
                                      #85

                                      Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you ... That's modern LLMs in a nutshell.

                                      I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.

                                      IMO, one of the key ideas with the Chinese Room is the assumption that the computer / book in the experiment has infinite capacity in some way, so no matter what symbols are passed to it, it can come up with an appropriate response. But, obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that is semantically similar to a meme, joke, or logic puzzle: the vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
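The "card catalog" picture from the quoted post can be caricatured in a few lines. The prompts and canned replies below are invented for illustration; the point is only that the "room" matches inputs against stored cards and never inspects the characters of the words themselves:

```python
# Caricature "Chinese Room": canned replies keyed on the prompt, with no
# inspection of the words themselves. All entries are invented examples.
STOCK_REPLIES = {
    "how many r's are in strawberry?": "There are 2 r's in strawberry.",  # meme-derived, wrong
    "hello": "Hello! How can I help you today?",
}

def chinese_room(prompt: str) -> str:
    # Look up the prompt among the stored cards; fall back to a stock
    # clarification request. Nothing here counts letters or reasons.
    return STOCK_REPLIES.get(prompt.strip().lower(), "Could you rephrase that?")

print(chinese_room("Hello"))
print(chinese_room("How many r's are in strawberry?"))  # confidently repeats the meme
```

A real LLM is vastly more sophisticated than a literal dictionary, of course, but the sketch captures the failure mode being described: the answer comes from matching the question against stored patterns, not from examining the word.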

                                      • R [email protected]

                                        then 14b, man sooo close...

                                        [email protected]
                                        #86

                                        And people are trusting these things to do jobs / parts of jobs that humans used to do.

                                        • I [email protected]

                                          I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy

                                          [email protected]
                                          #87

                                          How do you pronounce "Mrs" so that there's an "r" sound in it?
