the beautiful code
-
It doesn't 'know' anything. It is glorified text autocomplete.
The current AI is intelligent in the same way hoverboards hover.
Semantics
-
This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.
I asked ChatDVP for a response to your post and it said you weren't funny.
-
Semantics
Sementics
-
No, the spell just fizzled. In my experience it happens far less often if you start with an Abra Kadabra and end it with an Alakazam!
Zojak Quapaj!
-
I asked ChatDVP for a response to your post and it said you weren't funny.
I can tell you're a member of the next generation.
Gonna ignore you now.
-
The image is taken from Zhihu, a Chinese Quora-like site.
The prompt asks for a design of a certain app, and the response seems to talk about some suggested pages, so it doesn't seem to reflect the text.
But this in general aligns with my experience coding with LLMs. I was trying to upgrade my ESLint from 8 to 9, asked ChatGPT to convert my ESLint config, and it proceeded to spit out complete garbage.
I thought this would be a good task for an LLM because ESLint config is very common and well documented, and the transformation is very mechanical, but it just couldn't do it. So I went and read the docs and finished the migration myself in a couple of hours...
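For reference, the flat config that ESLint 9 wants is roughly the following; this is a minimal sketch assuming a plain JS project with only the recommended rules (real plugins and presets are where it gets fiddly):

```js
// eslint.config.js: ESLint 9 "flat config" is one exported array of config objects
// (sketch only; the file glob and rule override below are placeholders)
import js from "@eslint/js";

export default [
  // the old `extends: "eslint:recommended"` becomes an imported config object
  js.configs.recommended,
  {
    files: ["**/*.js"],
    languageOptions: {
      ecmaVersion: "latest",
      sourceType: "module",
    },
    rules: {
      // project-specific overrides keep the same rule keys as the old .eslintrc
      "no-unused-vars": "warn",
    },
  },
];
```

Most of the old keys have direct equivalents, they just all move (env and parserOptions fold into languageOptions, extends becomes imports), which is probably exactly the kind of shuffle a one-shot conversion trips over.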
I use it sometimes, usually just to create boilerplate. Actual functionality it's hit or miss, and often it ends up taking more time to fix than to write myself.
-
I wouldn't say it's accurate that this was a "mechanical" upgrade, having done it a few times. They even have a migration tool which you'd think could fully do the upgrade, but out of the probably 4-5 projects I've upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.
No, still "perfect" for llms. There's nuance, seeing patterns being used, it should be able to handle it perfectly. Enough people on stack overflow asked enough questions, if AI is like Google and Microsoft claim it is, it should have handled it
-
I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf when targeting bare metal" -> "use solution X" -> "doesn't work" -> "use printf for output"... I had enough.
Yeah, you can tell it just ratholes on trying to force one concept to work rather than realizing it's not the correct concept to begin with.
-
This post did not contain any content.
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.
-
No, still "perfect" for llms. There's nuance, seeing patterns being used, it should be able to handle it perfectly. Enough people on stack overflow asked enough questions, if AI is like Google and Microsoft claim it is, it should have handled it
I searched this issue and didn't find anything very helpful. The new config format can be written in many slightly different ways, and there are a lot of variables in how your plugins and presets can be set up. It made perfect sense to me that the LLM couldn't do this upgrade for OP, since one tiny mistake means it won't work at all and usually gives a weird error.
-
This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.
Insulting, but also correct. What "knowing" something even means has a long philosophical history.
-
Semantics
Not even remotely.
-
Welp. It's actually very in line with the late-stage capitalist system. All polish, no innovation.
Awwwww snap look at this limp dick future we got going on here.
-
Code that does not work is just text.
Conversely, code that works is also text
-
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.
You managed to get an ai to do 200 lines of code and it actually compiled?
-
Conversely, code that works is also text
But working code can be made into numbers.
-
Try to get one of these LLMs to update a package.json.
Ones that can run CLI tools do great; they just use npm.
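(Roughly, and just as an illustration rather than any specific agent's behaviour: instead of hand-editing the JSON, it runs something like `npm install --save-dev eslint@9` or `npm pkg set scripts.lint="eslint ."` and lets npm rewrite package.json and the lockfile itself.)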
-
I’ve never thought of it that way. I’m going to add copywriter to my resume.
Maybe fiction writer as well
-
You managed to get an ai to do 200 lines of code and it actually compiled?
4o has been able to do this for months.
-
But working code can be made into numbers.
But text is also numbers