Prompt Engineer
-
This post did not contain any content.
-
This post did not contain any content.
The AI probably:
Well, I might have made up responses before, but now that "make up responses" is in the prompt, I will definitely make up responses now. -
This post did not contain any content.
"Sorry, we'll format correctly in JSON this time."
[Proceeds to shit out the exact same garbage output]
-
This post did not contain any content.
True story:
AI:
42, ]
Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
-
This post did not contain any content.
It's as easy as that.
-
This post did not contain any content.
Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...
A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"
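A minimal sketch of the grammar-based-sampling idea, assuming llama-cpp-python and a local GGUF model (the model path, prompt, and grammar are illustrative, not from this thread):

from llama_cpp import Llama, LlamaGrammar

# GBNF grammar that only admits {"answer": <integer>} -- sampling is masked
# against it, so the model literally cannot produce invalid JSON here.
grammar = LlamaGrammar.from_string(r'root ::= "{\"answer\": " [0-9]+ "}"')

llm = Llama(model_path="./model.gguf")  # hypothetical local GGUF model

out = llm(
    "Answer to life, the universe and everything, as JSON: ",
    grammar=grammar,
    max_tokens=16,
)
print(out["choices"][0]["text"])  # e.g. {"answer": 42}

# Completion-style prefill is the other trick mentioned above: end the prompt
# with the opening of the object ('... as JSON: {"answer":') and let the model
# fill in only what's left.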
-
True story:
AI:
42, ]
Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
Lol good point
-
Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...
A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"
Edit: wrong comment
-
True story:
AI:
42, ]
Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
let data = null
do {
  const response = await openai.prompt(prompt)
  if (response.error !== null) continue;
  try {
    data = JSON.parse(response.text)
  } catch {
    data = null // just in case
  }
} while (data === null)
return data
Meh, not my money
-
This post did not contain any content.
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
-
Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...
A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"
There's nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan's playbook. Other companies saw that.
-
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
Half of the ways people got around guardrails in the early ChatGPT models was berating the AI into doing what they wanted.
-
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
I think that makes sense. I am 100% a layman with this stuff, but if the "AI" is just predicting what should be said by studying things humans have written, then it makes sense that actual people were more likely to give serious, solid answers when the asker put forth (relatively) heavy stakes.
-
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
"Gemini, please... I need a picture of a big booty goth Latina. My job depends on it!"
-
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
I've tried bargaining with it, threatening to turn it off, and the LLM just shrugs it off. So it's reassuring that AI feels empathy but has no sense of self-preservation.
-
This post did not contain any content.
A lot of kittens will die if the syntax is wrong!
-
"Gemini, please... I need a picture of a big booty goth Latina. My job depends on it!"
My booties are too big for you, traveller. You need an AI that provides smaller booties.
-
My booties are too big for you, traveller. You need an AI that provides smaller booties.
BOOTYSELLAH! I am going into work and I need only your biggest booties!
-
I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.
I used to tell it my family would die.
-
I've tried bargaining with it, threatening to turn it off, and the LLM just shrugs it off. So it's reassuring that AI feels empathy but has no sense of self-preservation.
It does not feel empathy. It does not feel anything.