WATER!
-
This post did not contain any content.
I tried using Cursor IDE and Claude Sonnet 4 to make an extension for Blender, and it keeps getting to the exact same point (super basic functions) of development, and then constantly breaking it when I try to get it to fine tune what i need to be done... This comic is accurate af.
-
This post did not contain any content.
AI = bad, I know, but do people order "water, please" instead of "a glass of water, please"?
Unless you want to end up with an expensive bottle of French water instead of a single glass of tap water.
-
AI = bad, I know, but do people order "water, please" instead of "a glass of water, please"?
Unless you want to end up with an expensive bottle of French water instead of a single glass of tap water.
I don't know what restaurants you go to, but in my adult life "water, please" has not once failed to get me a glass of water.
Sometimes the waiter asks whether I want bottled or tap, but that's about it.
-
I don't know what restaurants you go to, but in my adult life "water, please" has not once failed to get me a glass of water.
Sometimes the waiter asks whether I want bottled or tap, but that's about it.
Well, since it's annoying for both the waiter and the customer to have to specify every time whether it's bottled or tap, I just always say it directly, and I've seen other people do the same.
Many years ago, while abroad, I asked for water in a restaurant and ended up with an $8 1 L bottle. I'm not making that mistake again.
-
AI = bad, I know, but do people order "water, please" instead of "a glass of water, please"?
Unless you want to end up with an expensive bottle of French water instead of a single glass of tap water.
Hey now, expensive French water is really good though! How expensive is it where you live? I get a big (75 cl?) bottle of San Pellegrino for like 3.50€ here in France.
-
I don't know what restaurants you go to, but in my adult life "water, please" has not once failed to get me a glass of water.
Sometimes the waiter asks whether I want bottled or tap, but that's about it.
I went to a restaurant in Dallas near the stadiums where asking for "water, please" got us a glass bottle of whatever rich-people water they served.
Everything was expensive and the food was super whatever. It's called Soy Cowboy, if anybody's curious.
-
From what I understand of LLMs, your assessment seems likely to me. LLMs might actually be pretty accurate when asked to do relatively simpler, shorter tasks.
Yeah, I asked it to generate SDKs from API documentation and it failed to pull all the routes into methods, so it's very temperamental. If there's an easier SDK conversion program that I'm missing, I'd take hard-coded logic machines over fuzzy LLMs.
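A minimal sketch of the "hard-coded logic machine" alternative, just to illustrate the point: a deterministic walk over an OpenAPI spec cannot silently drop routes the way a fuzzy generator can, because every path in the spec is visited exactly once. The tiny spec dict and the `_request` helper are hypothetical stand-ins, not any real SDK generator.

```python
# Hypothetical sketch: emit one client method stub per route in an
# OpenAPI-style spec dict, so no route can be silently skipped.
def generate_sdk_methods(spec: dict) -> str:
    lines = ["class Client:"]
    for path, ops in spec.get("paths", {}).items():
        for verb, op in ops.items():
            # Derive a method name from operationId, or from verb + path.
            name = op.get("operationId") or (
                verb + "_" + path.strip("/")
                .replace("/", "_").replace("{", "").replace("}", "")
            )
            lines.append(f"    def {name}(self, **params):")
            lines.append(f"        return self._request({verb.upper()!r}, {path!r}, params)")
    return "\n".join(lines)

# Toy spec with two routes; the generator covers both, every time.
spec = {
    "paths": {
        "/users": {"get": {"operationId": "list_users"}},
        "/users/{id}": {"get": {}},
    }
}
print(generate_sdk_methods(spec))
```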
-
Haven't used any coding LLMs. I honestly have no clue about the accuracy of the comic. Can anyone enlighten me?
It's sometimes useful, often obnoxious, sometimes both.
It tends to shine on very blatantly obvious boilerplate stuff that is super easy but tedious. You can be sloppy with your input and it will clean it up into something reasonable. Even then you've got to be careful, as sometimes what seems blatantly obvious still gets screwed up in weird ways. Even with mistakes, it's sometimes easier to edit the output than to start from scratch.
Using an AI-enabled editor that watches your activity and suggests little snippets is useful, but it can be really annoying when it gets particularly insistent on a bad suggestion and keeps nagging you with "hey, look at this, you want to do this, right?"
Overall it's merely mildly useful to me, as my career has been significantly about minimizing boilerplate with decent success. However for a lot of developers, there's a ton of stupid boilerplate, owing to language design, obnoxiously verbose things, and inscrutable library documentation. I think that's why some developers are scratching their heads wondering what the supposed big deal is and why some think it's an amazing technology that has largely eliminated the need for them to manually code.
-
The comic is only accurate if you expect it to do everything for you, you're bad at communicating, and you're using an old model. Or if you're just unlucky.
I'd add that it also depends on your field. If you spend a lot of time assembling technically bespoke solutions that are still broadly consistent with a lot of popular projects, then it can cut through a lot in short order. When I come to a segment like that, the LLM tends to go a lot further.
But if you are doing something because you can't find anything vaguely like what you want to do, it tends to produce only 3 or so lines of useful material, and only a minority of the time. And the bad suggestions can be annoying. Less outright dangerous once you get used to being skeptical by default, but still annoying as it insists on re-emphasizing a bad suggestion.
So I can see where it can be super useful, and also how it can seem more trouble than it is worth.
Claude and GPT have been my current experience. The best improvement I've seen is the suggestions getting shorter. It used to be 3 maybe-useful lines bundled with a further dozen lines of not what I wanted. Now the first three lines might be similar, but it's less likely to suggest a big chunk of code.
I was helping someone the other day and the comic felt pretty accurate. It did exactly the opposite of what the user prompted for. Even after coaxing it into the general ballpark, about half the generated code was unrelated to the requested task, with side effects that would have seemed functional unless you paid attention and noticed that throughput would have been about 70% lower than you should expect. It was a significant risk, since the user was in over their head and unable to understand the suggestions they needed to review; they were working in a pretty jargon-heavy ecosystem (not the AI's fault, they had to invoke standard libraries with incomprehensible, jargon-heavy syntax).
-
AI = bad, I know, but do people order "water, please" instead of "a glass of water, please"?
Unless you want to end up with an expensive bottle of French water instead of a single glass of tap water.
I definitely often say just "water", usually because of the way it's asked, usually something like "What would you like to drink today?"
So it's usually just "water" or "I'll have water", or whatever drink, in response. "A glass of water" sounds odd as a response to me, especially since sometimes it's green tea, which usually wouldn't come in a glass. Or coffee, etc.
-
This post did not contain any content.
Water, please.
Here is your cactus.
Why are people saying please to robots anyway?
-
I use them frequently; they're extremely helpful, just don't get them to write everything.
As for the comic, it's pretty inaccurate. The only panel I find true is the too-much-water one; sometimes the bots like to take … longer methods.
Everyone has different experiences, but it's very hit and miss for me. Sometimes it gives some very useful boilerplate, saving me quite a bit of time; sometimes it hallucinates some insane stuff that isn't related to what I asked, or makes functions that don't return, or that call each other.
Like defining a function "getTheThing" then later calling "getSomethingElse", which doesn't exist. It's a simple enough error to fix, but sometimes it's so close to "correct" that finding it takes quite a while, because it looks right.
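That mismatch, sketched with the comment's own hypothetical names — the bug only surfaces when the call actually executes, which is why it can survive a casual read of the generated code:

```python
def getTheThing():
    # The helper the model actually defined.
    return {"status": "ok"}

def process():
    # Plausible-looking call to a function that was never defined;
    # Python only raises NameError when this line runs.
    return getSomethingElse()

try:
    process()
except NameError as err:
    print("caught:", err)
```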
-
This post did not contain any content.
Self-host your LLMs.
Qwen3:14b is fast, open source, and answers code questions with very good accuracy. You only need Ollama and a Podman container (for Open WebUI).
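A rough sketch of that setup as a config fragment. The commands follow the Ollama CLI and the Open WebUI container quick-start, but the image tag, port, and host alias are assumptions that may differ on your install:

```shell
# Pull and chat with the model locally via Ollama
ollama pull qwen3:14b
ollama run qwen3:14b

# Open WebUI in a Podman container, pointed at the host's Ollama API
# (host.containers.internal is Podman's host alias; verify on your setup)
podman run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  --name open-webui ghcr.io/open-webui/open-webui:main
```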
-
Self-host your LLMs.
Qwen3:14b is fast, open source, and answers code questions with very good accuracy. You only need Ollama and a Podman container (for Open WebUI).
Frankly, I don't think you seriously tested anything that you've mentioned here.
Nobody's using Qwen because it doesn't do tool calls. Nobody really uses Ollama for useful workloads, because they don't own the hardware to make it good enough.
That's not to say that I don't want self-hosted models to be good. I absolutely do. But let's be realistic here.
-
This post did not contain any content.
-
Haven't used any coding LLMs. I honestly have no clue about the accuracy of the comic. Can anyone enlighten me?
I find you get much better results if you talk to the LLM like you were instructing a robot, not a human. That means being very specific about everything.
It's the difference between, "I would like water" and, "I would like a glass filled with drinking water and a few cubes of ice mixed in".