agnos.is Forums

Prompt Engineer

Programmer Humor · 30 posts, 27 posters

cm0002@lemmy.world · #1

    This post did not contain any content.

[email protected] · #2, replying to #1

      The AI probably:
      Well, I might have made up responses before, but now that "make up responses" is in the prompt, I will definitely make up responses now.

[email protected] · #3, replying to #1

        "Sorry, we'll format correctly in JSON this time."

        [Proceeds to shit out the exact same garbage output]

[email protected] · #4, replying to #1

          True story:

          AI: 42, ]

          Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.

snotflickerman@lemmy.blahaj.zone · #5, replying to #1

            Press X to JSON.

            It's as easy as that.

[email protected] · #6, replying to #1 (edited)

Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...

              A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"
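
A toy sketch of the grammar-based idea: before emitting each token, mask out any candidate that would break the format, so invalid JSON is never even sampleable. Real grammar samplers (llama.cpp's GBNF grammars, for instance) walk a full grammar; this simplified checker only tracks bracket nesting and string state, so treat it as an illustration rather than an implementation:

```javascript
// Toy grammar-constrained sampling: reject any candidate token that would
// turn the output so far into something no valid JSON document could extend.
// Only bracket/brace nesting and string state are tracked here.
function isPlausibleJsonPrefix(text) {
  const stack = [];
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{" || ch === "[") stack.push(ch);
    else if (ch === "}") { if (stack.pop() !== "{") return false; }
    else if (ch === "]") { if (stack.pop() !== "[") return false; }
  }
  return true; // nothing was closed that wasn't open
}

// At each decoding step, the sampler would only pick from this filtered set.
function allowedTokens(prefix, vocab) {
  return vocab.filter((tok) => isPlausibleJsonPrefix(prefix + tok));
}

const vocab = ['{"answer":', " 42", "]", "}", ', "done": true'];
console.log(allowedTokens('{"answer": 42', vocab));
// "]" is rejected: it would close a bracket that was never opened
```

A real constrained decoder applies this mask to the model's logits before sampling, which is why the model "literally can't" emit invalid JSON.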

[email protected] · #7, replying to #4

                Lol good point

[email protected] · #8, replying to #6 (edited)

                  Edit: wrong comment

[email protected] · #9, replying to #4

                    let data = null
                    do {
                        const response = await openai.prompt(prompt)
                        if (response.error !== null) continue;
                        try {
                            data = JSON.parse(response.text)
                        } catch {
                            data = null // just in case
                        }
                    } while (data === null)
                    return data
                    

                    Meh, not my money
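
For what it's worth, the loop above retries forever on the caller's dime. A minimal sketch of the same idea with a retry cap, where `generate` is a hypothetical stand-in for whatever client call returns the model's raw text (not a real SDK method):

```javascript
// Bounded version of the retry-until-it-parses loop: give up after a few
// attempts instead of burning tokens indefinitely. `generate` is a
// placeholder for the actual API call (prompt string in, raw text out).
async function promptForJson(generate, prompt, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const text = await generate(prompt);
    try {
      return JSON.parse(text); // success: hand back the parsed data
    } catch (err) {
      lastError = err; // could also feed this error into the next prompt
    }
  }
  throw new Error(`No valid JSON after ${maxAttempts} attempts: ${lastError}`);
}
```

Capping attempts turns "not my money" into a bounded cost, and surfacing the last parse error makes the failure debuggable instead of silent.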

[email protected] · #10, replying to #1

                      I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.

shnizmuffin@lemmy.inbutts.lol · #11, replying to #6

                        There's nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan's playbook. Other companies saw that.

[email protected] · #12, replying to #10

Half of the ways people were getting around guardrails in the early ChatGPT models involved berating the AI into doing what they wanted.

[email protected] · #13, replying to #10

I think that makes sense. I am 100% a layman with this stuff, but if the "AI" is just predicting what should be said by studying things humans have written, then it makes sense that actual people were more likely to give serious, solid answers when the asker was putting forth (relatively) heavy stakes.

kolanaki@pawb.social · #14, replying to #10

                              "Gemini, please... I need a picture of a big booty goth Latina. My job depends on it!"

[email protected] · #15, replying to #10

I've tried bargaining with it, threatening to turn it off, and the LLM just scoffs at it. So it's reassuring that AI feels empathy but has no sense of self-preservation.

[email protected] · #16, replying to #1

                                  A lot of kittens will die if the syntax is wrong!

[email protected] · #17, replying to #14

                                    My booties are too big for you, traveller. You need an AI that provides smaller booties.

kolanaki@pawb.social · #18, replying to #17

                                      BOOTYSELLAH! I am going into work and I need only your biggest booties!

[email protected] · #19, replying to #10

                                        I used to tell it my family would die.

[email protected] · #20, replying to #15

                                          It does not feel empathy. It does not feel anything.
