the beautiful code
-
It's not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts
-
No the spell just fizzled. In my experience it happens far less often if you start with an Abra kabara and end it with an Alakazam!
Yeah, the Abra kabara init and Alakazam cleanup are an important part, especially until you have become good enough to configure your own init.
There is an alternative init, Abra Kadabra, which automatically adds a cleanup and some general fixes when it detects the end of the spell.
-
I asked ChatGPT for help with bare metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.
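For what it's worth, the usual bare-metal substitute for printf on that board is writing bytes straight to a memory-mapped UART. Here's a minimal sketch, assuming the BCM2835's PL011 UART (data register at 0x20201000, flag register at 0x20201018; verify against the peripherals manual for your board) with the register addresses passed in as pointers so the routine can also be exercised off-target:

```c
#include <stdint.h>

/* Sketch: byte-at-a-time output over a PL011-style UART.
   Assumption: on the Pi Zero W (BCM2835) the data register sits at
   0x20201000 and the flag register at 0x20201018; bit 5 of the flag
   register (TXFF) is set while the transmit FIFO is full. Register
   pointers are parameters so the code can be tested against plain
   variables instead of real hardware. */

#define UART_FR_TXFF (1u << 5) /* transmit FIFO full */

static void uart_putc(volatile uint32_t *dr, volatile uint32_t *fr, char c) {
    while (*fr & UART_FR_TXFF) { /* spin until the FIFO has room */ }
    *dr = (uint32_t)(unsigned char)c;
}

static void uart_puts(volatile uint32_t *dr, volatile uint32_t *fr, const char *s) {
    for (; *s; ++s)
        uart_putc(dr, fr, *s);
}
```

On real hardware you'd call `uart_puts((volatile uint32_t *)0x20201000, (volatile uint32_t *)0x20201018, "hello\n")` after setting up the UART clock and baud divisor, which is exactly the part the chatbot kept skipping past.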
Did it at least try puts?
-
This post did not contain any content.
Welp. It's actually very in line with the late-stage capitalist system. All polish, no innovation.
-
It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.
It’s very useful for throwaway work like writing scripts and automations.
It’s useful, but not the 10x multiplier all the CEOs claim it is.
-
It doesn't 'know' anything. It is glorified text autocomplete.
The current AI is intelligent like how Hoverboards hover.
LLMs are the smartest thing ever on subjects you have no fucking clue about.
On subjects you have at least a year of experience with, it suddenly becomes the dumbest shit you've ever seen.
-
It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.
It’s very useful for throwaway work like writing scripts and automations.
It’s useful, but not the 10x multiplier all the CEOs claim it is.
Fully agreed. Everybody is betting it'll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn't any guarantee that it'll get to where the corpos are assuming it already is.
Which is not the same as not having better autocomplete/spellcheck/"hey, how do I format this specific thing" tools.
-
This post did not contain any content.
This weekend I successfully used Claude to add three features to a Rust utility I had wanted for a couple of years. I had opened issue requests, but no one else volunteered. I had tried learning Rust, Wayland, and GTK to do it myself, but the docs at the time weren’t great and the learning curve was steep. But Claude figured it all out pretty quickly.
-
LLMs are systems that output human-readable natural-language answers, not true answers
And a good part of the time, those answers can have a… subtly loose relationship with the truth
-
It doesn't 'know' anything. It is glorified text autocomplete.
The current AI is intelligent like how Hoverboards hover.
This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.
-
Fully agreed. Everybody is betting it'll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn't any guarantee that it'll get to where the corpos are assuming it already is.
Which is not the same as not having better autocomplete/spellcheck/"hey, how do I format this specific thing" tools.
Yeah, it’s still super useful.
I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant excel.
I get a bit more done faster, that’s a solid value proposition.
-
It doesn't 'know' anything. It is glorified text autocomplete.
The current AI is intelligent like how Hoverboards hover.
Semantics
-
This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.
I asked ChatDVP for a response to your post and it said you weren't funny.
-
Semantics
Sementics
-
No the spell just fizzled. In my experience it happens far less often if you start with an Abra kabara and end it with an Alakazam!
Zojak Quapaj!
-
I asked ChatDVP for a response to your post and it said you weren't funny.
I can tell you're a member of the next generation.
Gonna ignore you now.
-
The image is taken from Zhihu, a Chinese Quora-like site.
The prompt is asking for a design of a certain app, and the response seems to talk about some suggested pages. So it doesn't seem to reflect the text.
But this in general aligns with my experience coding with LLMs. I was trying to upgrade my ESLint from 8 to 9, asked ChatGPT to convert my ESLint config file, and it proceeded to spit out complete garbage.
I thought this would be a good task for an LLM because ESLint config is very common and well-documented, and the transformation is very mechanical, but it just cannot do it. So I proceeded to read the documentation and finished the migration in a couple of hours...
I use it sometimes, usually just to create boilerplate. Actual functionality it's hit or miss, and often it ends up taking more time to fix than to write myself.
-
I wouldn't say it's accurate that this was a "mechanical" upgrade, having done it a few times. They even have a migration tool which you'd think could fully do the upgrade, but out of the probably 4-5 projects I've upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.
No, still "perfect" for LLMs. There's nuance, and there are patterns to pick up on; it should be able to handle it perfectly. Enough people on Stack Overflow asked enough questions that, if AI is what Google and Microsoft claim it is, it should have handled it.
-
I asked ChatGPT for help with bare metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.
Yeah, you can tell it just ratholes on trying to force one concept to work rather than realizing it's not the correct concept to begin with.
-
This post did not contain any content.
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.