the beautiful code
-
This is interesting, I would be quite impressed if this PR got merged without additional changes.
We'll see. Whether it gets merged in any form, it's still a big win for me because I finally was able to get some changes implemented that I had been wanting for a couple years.
are you able to read and have a decent understanding of the output code?
Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.
In one case, it got stuck debugging, and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn't spot myself. The labels were appearing far too small at one point, but I couldn't see that Claude had changed any code that should affect the label size. It turned out two data structures hadn't been merged correctly, so default values weren't getting overridden correctly. It was the sort of issue I could see a human dev introducing on the first pass.
do you know why it is uncommented?
Yes, that's the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that's apparently why it was implemented but commented out. But it seems that, due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.
Through the process, I also became more familiar with Rust tooling and Rust itself.
Thank you! This is very helpful.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
Most IDEs are pretty decent at it if you configure them correctly. I used IntelliJ and it knows the difference. Use the refactor feature and it'll crawl references, not just rename all instances.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
I use PyCharm for this and in general it does a great job. At work we've got some massive repos and it'll handle it fine.
The "find" tab shows where it'll make changes and you can click "don't change anything in this directory"
-
No one cares about the definition of knowledge to this extent except for philosophers. The person who originally used the word "know" most definitely didn't give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you'd like, instead of understanding its usage in context, is very much semantics.
When you debate whether a being truly knows something or not, you are, in fact, engaging in the philosophy of epistemology. You can no more avoid epistemology when discussing knowledge than you can avoid discussing physics when describing the flight of a baseball.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
Not reliably, no. Python is too dynamic to do that kind of thing without solving general program equivalence which is undecidable.
Use a static language, problem solved.
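To make the "too dynamic" point concrete, here's a small made-up example (all names invented for illustration): a plain find-and-replace on count would clobber the unrelated local variable, and even a fully scope-aware tool can't see the string-based access at the bottom:

```python
# Hypothetical illustration: why renaming "count" in Python is not just find-and-replace.

class Counter:
    def __init__(self):
        self.count = 0          # the attribute we actually want to rename

    def bump(self):
        count = self.count + 1  # unrelated local variable with the same name
        self.count = count


def report(obj, field):
    # Dynamic access: no static analysis can prove this refers to Counter.count
    return getattr(obj, field)


c = Counter()
c.bump()
print(report(c, "count"))       # renaming the attribute silently breaks this line
```

An IDE refactor handles the first two cases fine; the dynamic one is why "reliably" is the hard word in the question.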
-
I asked ChatGPT for help with bare metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.
QEMU makes it pretty painless to hook up gdb just FYI; you should look into that. I think you can also have it provide a memory mapped UART for I/O which you can use with newlib to get printf debugging
-
QEMU makes it pretty painless to hook up gdb just FYI; you should look into that. I think you can also have it provide a memory mapped UART for I/O which you can use with newlib to get printf debugging
The latter is what I tried, and also kinda wanted ChatGPT to do, which it refused
-
This post did not contain any content.
All programs can be written with one less line of code.
All programs have at least one bug.

By the logical consequence of these axioms, every program can be reduced to one line of code - that doesn't work.
One day AI will get there.
-
Sorry, the language of my original post might seem confrontational, but that is not my intention; I'm trying to find value in LLMs, since people are excited about them.
I am not a professional programmer, nor am I programming any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master's students, so I want to understand what a good use case for this technology is, and when I can expect it to be helpful.
Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months or so.
I did try @eslint/migrate-config; unfortunately it added a good amount of bloat and ended up not working.
So I just gave up and read the docs.
Gotcha. No worries. I figured you were coming in good faith but wasn't certain. Who is pushing LLMs for programming that hard? In my bubble, which often includes Lemmy, most people HATE them for all uses. I get that tech bros and LinkedIn crazies probably push this tech for coding a lot, but outside of that, most devs I know IRL are either lukewarm about or dislike LLMs for dev work.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
IntelliJ is actually pretty good at this. Besides that, Cursor or Windsurf should be able to. I was using Cursor for a while, and when I needed to refactor something it was pretty good at picking that up. It kept crashing on me though, so I am now trying Windsurf and some other options. I miss the autocomplete features in Cursor though, as I would use them all the time to fill out boilerplate stuff as I write.
The one key difference with Cursor and Windsurf compared to other products is that they will look at the entire context again for any change, or at least a bit of it. You make a change, and it checks whether it needs to make changes elsewhere.
I still don't trust AI to do much though, but it's an excellent helper
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
Okay, I realize I'm that person, but for those interested:
`tree`, `cat` and `sed` get the job done nicely.

And... it's my nap time, now. Please keep the Internet working while I'm napping. I have grown fond of parts of it. Goodnight.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
For the most part "Rename symbol" in VSCode will work well. But it's limited by scope.
-
For the most part "Rename symbol" in VSCode will work well. But it's limited by scope.
Yeah, I'm looking for something that would understand the operation (? insert correct term here) of the language well enough to rename intelligently.
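That's what scope-aware refactoring engines do. As a rough sketch, here's roughly how you could drive rope (a Python refactoring library that some editor plugins use) from a script; the project path, file, and symbol names below are made up, and the exact calls are from memory rather than checked against the docs:

```python
# Rough sketch (unverified): scope-aware rename with the rope library (pip install rope).
from rope.base.project import Project
from rope.refactor.rename import Rename

project = Project("path/to/my_project")           # project root (hypothetical)
resource = project.get_resource("app/models.py")  # file containing the symbol (hypothetical)

# Offset of one occurrence of the name to rename; rope resolves which
# definition it belongs to and touches only references to that definition,
# not unrelated locals or library names that happen to match.
offset = resource.read().index("old_name")

changes = Rename(project, resource, offset).get_changes("new_name")
project.do(changes)  # apply the edits across the whole project
project.close()
```

PyCharm's Rename refactoring and VSCode's "Rename symbol" do this same kind of analysis behind a UI, with the caveat from elsewhere in the thread that truly dynamic access can't be caught.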
-
I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare
It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!
...but it won't be that impressive once we remember concepts like "monkey, typing, Shakespeare" were already embedded in the training data.
-
Practically all LLMs aren't good for any logic. Try to play ASCII tic-tac-toe against one. All GPT models lost against my four-year-old niece, and I wouldn't trust her to write production code.
Once a single model (doesn't have to be an LLM) can beat Stockfish in chess, AlphaGo in Go, and my niece in tic-tac-toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.
Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn't compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).
I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will, it's just a matter of time, but state-of-the-art LLMs are basically just extremely good "search engines" or interactive versions of "stack overflow", not good enough to do real "thinking tasks".
extremely good "search engines" or interactive versions of "stack overflow"
Which is such a decent use of them! I've used it on my own hardware a few times just to say "Hey give me a comparison of these things", or "How would I write a function that does this?" Or "Please explain this more simply...more simply....more simply..."
I see it as a search engine that connects nodes of concepts together, basically.
And it's great for that. And it's impressive!
But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of "It'll replace all human coders" fever dreams.
-
someone drank the koolaid.
LLMs will never code for two reasons.
one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and provide an interpretation of the collection of their content.
software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.
two, because the prevalence of LLM generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.
the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.
so yeah. we're about 100 years from the whole "it can't draw its hands" stage because it doesn't even know what hands are.
This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.
I'll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.
-
It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!
...but it won't be that impressive once we remember concepts like "monkey, typing, Shakespeare" were already embedded in the training data.
If we just asked Jeeves in the first place we wouldn't be in this mess.
-
This post did not contain any content.
AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.
-
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
Find and Replace?
-
It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.
I asked ChatGPT to give me a simple Squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let through HTTP, and despite many attempts to get a working config that did that, it just failed.
So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that....
It confidently gave me one
IMO, that's one of the biggest "sins" of the current LLMs: they're trained to generate words that make them sound confident.
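For anyone landing here with the same problem, a minimal sketch of the kind of squid.conf being described, assuming "block everything except HTTPS" means only allowing CONNECT tunnels to port 443; it's untested and trimmed down, so check it against the Squid documentation before relying on it:

```
# Minimal sketch (untested): allow only CONNECT tunnels to port 443, deny everything else.
acl localnet src 192.168.0.0/16      # adjust to your client network (assumption)
acl SSL_ports port 443
acl CONNECT method CONNECT

http_access deny !CONNECT            # refuses plain HTTP and any other method
http_access deny CONNECT !SSL_ports  # refuses tunnels to ports other than 443
http_access allow localnet
http_access deny all

http_port 3128
```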