the beautiful code
-
Then I am quite confused about what an LLM is supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my eslint upgrade for half a year: I don't have a lot of experience in TypeScript beyond one project in my college webdev class.
So if I can sit down for a couple of hours and port my rather simple eslint config, arguably the most mechanical task I have seen in my limited programming experience, while the LLM can't produce anything close to correct, then I am rather confused about what "real programmers" would use it for...
People here say boilerplate code, but honestly I don't quite recall the last time I needed to write a lot of boilerplate code.
I have also tried to use an LLM to debug SELinux and Docker containers on my homelab; unfortunately, it was absolutely useless at that as well.
With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?
People tend to either despise or evangelize LLMs. To me, GitHub Copilot has a decent amount of utility. I only use the autocomplete feature, which does things like save me from typing the 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press Tab. It's just a time saver. I have never used it like "write me a script or a function that does x" the way some people do. I am not interested in that, as it seems like a sad crutch whose output I'd need to customize so much anyway that I may as well skip that step.
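For example (an invented snippet, not from my actual code), when I type a constructor signature, the suggestion is usually exactly the obvious field assignments:

```python
class User:
    def __init__(self, name: str, email: str, age: int):
        # Copilot typically suggests these predictable assignments;
        # one Tab instead of typing three lines.
        self.name = name
        self.email = email
        self.age = age
```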
Having said that, I'm noticing the Copilot autocomplete seems to be getting worse over time. I'm not sure why it's worsening, but if it ever feels not worth it anymore I'll drop it, no harm no foul. The binary thinkers tend to assume you're either a good dev who despises all forms of AI or an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.
PS. Did you try the eslint 8 -> 9 migration tool? If your config was simple enough for it, it likely would've done all or almost all the work for you... It didn't fully work for me; I had to resolve several errors, because I tend to add custom plugins, presets, and rules that differ across projects.
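For reference, the migrator I mean is ESLint's official one; if I remember right, you point it at your old config roughly like this (your config filename may differ):

```sh
npx @eslint/migrate-config .eslintrc.json
```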
-
I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare
-
Can't wait to see "we use AI agents to generate well-structured non-functioning code" with off-center everything and non-working embeds on the website.
-
Well, it only took two years to go from the cursed Will Smith eating spaghetti video to Veo 3, which can make completely lifelike videos with audio. So who knows what the future holds.
There actually isn't any doubt that AI (especially AGI) will surpass humans on all thinking tasks unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.
-
A drill press (or its inventors) doesn't claim it can do that, but with LLMs the makers claim to replace humans on a lot of thinking tasks. They even brag about benchmark results, claim Bachelor's, Master's, and PhD-level intelligence, and call them "reasoning" models, but the models still fail to beat my niece at tic-tac-toe, who, by the way, doesn't have a PhD in anything.
LLMs are typically good at things that appeared a lot in their training data. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that looks OK, even during PR review, but might blow up in your face a few weeks later.
If they can't handle things they saw during training (just sparsely, like tic-tac-toe), they won't be able to produce code you should use in production. I wouldn't trust any junior dev who doesn't set their O right next to the two Xs.
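To illustrate the kind of failure I mean with an invented Python example (not actual LLM output), this is code that looks perfectly reasonable in a diff and still carries a bug that only shows up later:

```python
def add_tag(item, tags=[]):  # the [] default is created once and shared between calls
    tags.append(item)
    return tags

print(add_tag("a"))  # ['a']       -- looks correct in review and in a quick test
print(add_tag("b"))  # ['a', 'b']  -- weeks later: state leaks between calls
```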
Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.
I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)
-
It’s futile even trying to highlight the things LLMs do very well, as Lemmy is incredibly biased against them.
I can't speak for Lemmy, but I'm personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree), LLMs are a tool, and we should use that tool for the things it's good at. But "thinking" is not one of those things, and software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates can help with that, and never was I bottlenecked by my typing speed when writing software.
The bottleneck was always the time I needed to plan the structure of the software and to design good, correct abstractions and the overall architecture. Exactly the things LLMs can't do.
Copilot even fails to stick to the coding style used in the same file, just because it saw a different style more often during training.
-
Ctrl+A + Del.
So clean.
-
Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.
I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)
Totally agree with that, and I don't think anybody would see it as controversial. LLMs are actually good at a lot of things, just not thinking, and typically not if you are an expert. That's why LLMs know more about the anatomy of humans than I do, but probably not more than most people with a medical degree.
-
Well yeah, it’s working from incomplete knowledge of the code base. If you asked a human to do the same, they would struggle.
LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.
No, a human would just find an API that is publicly available. And the fact that it knew the static class "Misc" means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized as "I want to color a player's model in GAME using Python and SCRIPTING ENGINE".
-
4o has been able to do this for months.
I tried; it can't get through four lines without messing up. Unless I give it tasks that are so stupendously simple that I'm faster typing them myself while watching TV.
-
Uh yeah, like all the time. Anyone who says otherwise really hasn't tried recently. I know it's a meme that AI can't code (and in many cases that's still true, e.g. I don't have the AI do anything with OpenCV or complex math), but it's very routine these days for common use cases like web development.
You must be a big fan of boilerplate
-
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.
Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates functions that would be fantastic to use - if they existed.
-
Code that works is also just text.
It is text, but not just text.
-
I use ChatGPT for Go programming all the time, and it rarely has problems. I think Go is more niche than Kotlin.
I get a bit frustrated at it trying to replicate everyone else's code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of Go's standard approach, which was no longer sufficient for me. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard Go errors everywhere.
-
It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.
I asked ChatGPT to give me a simple Squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let HTTP through, and despite many attempts to get a working config, it just failed.
So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
-
You must be a big fan of boilerplate
Not sure what you mean; boilerplate code is one of the things AI is good at.
Take a straightforward Django project, for example. Given a models.py file, AI can easily write the corresponding admin file or a RESTful API file. That's generally just tedious boilerplate work that requires no decision making - perfect for an AI.
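As a sketch (a hypothetical Article model, not from any real project), the admin file really is a mechanical translation of the model:

```python
# models.py - a typical model definition
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

# admin.py - the corresponding boilerplate, derivable line by line from the model
from django.contrib import admin
from .models import Article

@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title", "published")
    search_fields = ("title", "body")
```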
More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.
-
Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates functions that would be fantastic to use - if they existed.
My guess is that what's going on is there's tons of pseudocode out there that looks like a real language but has undefined functions as placeholders, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don't realize things; they just match very complex patterns).
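Something like this invented snippet is all over tutorials and answer posts; the helper names read plausibly but are defined nowhere:

```python
def handle_upload(file):
    validate_file(file)       # placeholder - never implemented anywhere
    store_in_bucket(file)     # placeholder - never implemented anywhere
    notify_user(file.owner)   # placeholder - never implemented anywhere
```

Train on enough of that, and "call a function that sounds right" becomes a learned pattern, whether or not the function exists.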
-
Well, it only took two years to go from the cursed Will Smith eating spaghetti video to Veo 3, which can make completely lifelike videos with audio. So who knows what the future holds.
The cursed Will Smith spaghetti video wasn't made with the best AI video model available at the time, just with what consumers could run on their own hardware. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.
-
This is interesting, I would be quite impressed if this PR got merged without additional changes.
I am genuinely curious, no judgement at all: since you mentioned that you are not a Rust/GTK expert, are you able to read and have a decent understanding of the output code?
For example, in the `sway.rs` file, you uncommented a piece of code about floating nodes in the `get_all_windows` function; do you know why it is uncommented? (Again, not trying to judge; it is a genuine question. I also don't know Rust or GTK, just curious.)
This is interesting, I would be quite impressed if this PR got merged without additional changes.
We'll see. Whether it gets merged in any form, it's still a big win for me because I finally was able to get some changes implemented that I had been wanting for a couple years.
are you able to read and have a decent understanding of the output code?
Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.
In one case, it got stuck debugging, and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn't spot myself. The labels were appearing far too small at one point, but I couldn't see that Claude had changed any code that should affect the label size. It turned out two data structures hadn't been merged correctly, so default values weren't getting overridden properly. It was the sort of issue I could see a human dev introducing on a first pass.
do you know why it is uncommented?
Yes, that's the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that's apparently why it was implemented but commented out. But it seems due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.
Through the process, I also became more familiar with Rust tooling and Rust itself.
-
It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.
I asked ChatGPT to give me a simple Squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let HTTP through, and despite many attempts to get a working config, it just failed.
So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
Man, I can't wait to try out generative AI to generate config files for mission-critical stuff!
Imagine paying all of us devops wankers when my idiot boss can just ask ChatGPT to sort out all this legacy mess we're juggling on the daily!