the beautiful code
-
4o has been able to do this for months.
I tried, it can't get through four lines without messing up. Unless I give it tasks that are so stupendously simple that I'm faster typing them myself while watching TV.
-
Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and in many cases that’s still true, e.g. I don’t have the AI do anything with OpenCV or complex math), but it’s very routine these days for common use cases like web development.
You must be a big fan of boilerplate
-
To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.
LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.
Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates some functions that would be fantastic to use - if they existed.
-
Code that works is also just text.
It is text, but not just text
-
I use ChatGPT for Go programming all the time and it rarely has problems. I think Go is more niche than Kotlin.
I get a bit frustrated at it trying to replicate everyone else's code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of Go's standard errors, which were not sufficient for me anymore. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard Go errors everywhere.
-
This post did not contain any content.
It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.
I asked ChatGPT to give me a simple squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let through HTTP, and despite many attempts to get a working config, it just failed.
So yeah, in the end I have to learn squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
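For what it's worth, here's a minimal sketch of the kind of squid.conf that should do this (an assumption on my part, not a verified config): HTTPS through a proxy arrives as a CONNECT request, so the idea is to deny everything that isn't a CONNECT to port 443. Getting the http_access ordering wrong is exactly the kind of thing these models trip over.

```
# Sketch only: allow HTTPS tunnels (CONNECT to 443), deny everything else
acl SSL_ports port 443
acl CONNECT method CONNECT

# Plain HTTP requests are not CONNECT, so this denies them
http_access deny !CONNECT
# Deny tunnels to anything other than port 443
http_access deny CONNECT !SSL_ports
# What remains is CONNECT to 443; a real setup would restrict this
# to a source ACL (e.g. localnet) instead of allowing everyone
http_access allow all
```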
-
You must be a big fan of boilerplate
Not sure what you mean, boilerplate code is one of the things AI is good at.
Take a straightforward Django project, for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI (see the sketch below).
More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.
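A hypothetical illustration of that Django case (the model and its fields are invented here, but the admin registration pattern is standard):

```python
# models.py -- a hypothetical example model
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)
```

```python
# admin.py -- the mechanical counterpart an LLM can fill in from models.py
from django.contrib import admin
from .models import Article

@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title", "published")
    search_fields = ("title", "body")
```

There's no real decision to make in the second file; it's determined almost entirely by the first, which is why this is a sweet spot for generation.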
-
Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates some functions that would be fantastic to use - if they existed.
My guess is that there's tons of pseudocode out there that looks like a real language but uses functions that don't exist as placeholders, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don't realize things, they just pattern-match very complex patterns).
-
well, it only took 2 years to go from the cursed Will Smith eating spaghetti video to Veo 3, which can make completely lifelike videos with audio. so who knows what the future holds
The cursed Will Smith eating spaghetti wasn't the best video AI model available at the time, just what consumers could run on their own hardware. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.
-
This is interesting, I would be quite impressed if this PR got merged without additional changes.
I am genuinely curious, no judgement at all: since you mentioned that you are not a Rust/GTK expert, are you able to read and have a decent understanding of the output code?
For example, in the sway.rs file, you uncommented a piece of code about floating nodes in the get_all_windows function. Do you know why it is uncommented? (Again, not trying to judge; it is a genuine question. I also don't know Rust or GTK, just curious.)
This is interesting, I would be quite impressed if this PR got merged without additional changes.
We'll see. Whether it gets merged in any form, it's still a big win for me, because I finally was able to get some changes implemented that I had been wanting for a couple of years.
are you able to read and have a decent understanding of the output code?
Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.
In one case, it got stuck debugging and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn't spot myself. The labels were appearing far too small at one point, but I couldn't see that Claude had changed any code that should affect the label size. It turned out two data structures hadn't been merged correctly, so default values weren't getting overridden. It was the sort of issue I could see a human dev introducing on the first pass.
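The project is Rust, but a minimal Python analogy of that kind of merge-order bug (names invented for illustration) shows how easily it slips in:

```python
defaults = {"label_size": 24, "font": "sans"}
user_config = {"label_size": 48}

# Buggy merge: defaults applied last, so they clobber the user's override
merged = {**user_config, **defaults}   # label_size is 24 -- tiny labels

# Correct merge: user values override the defaults
merged = {**defaults, **user_config}   # label_size is 48
```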
do you know why it is uncommented?
Yes, that's the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that's apparently why it was implemented but commented out. But it seems that, due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.
Through the process, I also became more familiar with Rust tooling and Rust itself.
-
It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.
I asked ChatGPT to give me a simple squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work. It let through HTTP, and despite many attempts to get a working config, it just failed.
So yeah, in the end I have to learn squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
Man, I can't wait to try out generative AI to generate config files for mission-critical stuff!
Imagine paying all of us devops wankers when my idiot boss can just ask ChatGPT to sort out all this legacy mess we're juggling with on the daily!
-
This post did not contain any content.
I've used it extensively, almost $100 in credits, and generally it could one-shot everything I threw at it. However: I gave it architectural instructions and told it to use test-driven development and which test suite to use. Without the tests, yeah, it wouldn't work, and a decent amount of the time went to cleaning up mistakes the tests caught. The same can be said for humans, though.
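For anyone curious what that looks like in practice, a minimal pytest sketch (the function and its spec are hypothetical): the tests are written before asking the model for an implementation, so its mistakes fail loudly instead of slipping through.

```python
# test_slugify.py -- tests written first, defining the behavior
import re

def slugify(text: str) -> str:
    # The implementation the model is asked to produce *after* the tests exist
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_basic_slug():
    assert slugify("Hello, World!") == "hello-world"

def test_whitespace_collapses():
    assert slugify("  a   b ") == "a-b"
```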
-
Conversely, code that works is also text
But not just text
Also, that's not the converse of what the parent comment said
-
This post did not contain any content.
Laugh it up while you can.
We're in the "haha it can't draw hands!" phase of coding.
-
Laugh it up while you can.
We're in the "haha it can't draw hands!" phase of coding.
someone drank the koolaid.
LLMs will never code for two reasons.
one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and provide an interpretation of that collected content.
software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.
two, because the prevalence of LLM generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.
the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.
so yeah. we're about 100 years from the whole "it can't draw its hands" stage because it doesn't even know what hands are.
-
Laugh it up while you can.
We're in the "haha it can't draw hands!" phase of coding.
AI bad.
But also, video AI started with Will Smith eating spaghetti just a couple of years ago. We keep talking about AI doing complex tasks right now and its limitations, then extrapolating its development linearly. It's not linear and it's not in one direction. It's an exponential and rhizomatic process. Humans always over-estimate (ignoring hard limits) and under-estimate (thinking linearly) how these things go. With rocketships, with the internet/social media, and now with AI.
-
I've used it extensively, almost $100 in credits, and generally it could one-shot everything I threw at it. However: I gave it architectural instructions and told it to use test-driven development and which test suite to use. Without the tests, yeah, it wouldn't work, and a decent amount of the time went to cleaning up mistakes the tests caught. The same can be said for humans, though.
How can it pass if it hasn't had lessons? Well said. Ooh, I wonder if lecture footage would be able to teach AI, or audio from tutors...
-
This post did not contain any content.
Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium-sized Python project? I don't mean the easy stuff an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean being able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.
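To make the difficulty concrete, a small Python sketch of the case the question is about (names invented): a textual find-and-replace on config would wrongly touch the shadowing parameter, while a scope-aware rename must change only the module-level references.

```python
import logging

config = {"level": "INFO"}    # module-level: SHOULD be renamed

def setup(config):            # parameter shadows it: must NOT be renamed
    logging.basicConfig(level=config["level"])

def dump():
    print(config)             # module-level reference: SHOULD be renamed
```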
-
The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.
There remains to this day strong philosophical debate on how we can be sure we really "know" anything at all, and thought experiments such as the Chinese Room illustrate that "knowing" is far, far more complex than we might believe.
For instance, is it simply following a set path like a river in a gorge? Is it ever actually "considering" anything, or just doing what it's told?
No one cares about the definition of knowledge to this extent except philosophers. The person who originally used the word "know" most definitely didn't give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you'd like, instead of understanding its usage in context, is very much semantics.
-
With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?
People tend to despise or evangelize LLMs. To me, github copilot has a decent amount of utility. I only use the auto-complete feature which does things like save me from typing 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press tab. It's just a time saver. I have never used it like "write me a script or a function that does x" like some people do. I am not interested in that as it seems like a sad crutch that I'd need to customize so much anyway that I may as well skip that step.
Having said that, I'm noticing the copilot autocomplete seems to be getting worse over time. I'm not sure why it's worsening, but if it ever feels not worth it anymore I'll drop it, no harm no foul. The binary thinkers tend to think you're either a good dev who despises all forms of AI or you're an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.
PS. did you try the eslint 8 -> 9 migration tool? If your config was simple enough for it, it likely would've done all or almost all the work for you... It didn't fully work for me. I had to resolve several errors, because I tend to add several custom plugins, presets, and rules that differ across projects.
Sorry, the language in my original post might seem confrontational, but that is not my intention; I'm trying to find value in LLMs, since people are excited about them.
I am not a professional programmer, nor am I programming any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master's students, so I want to understand what a good use case for this technology is, and when I can expect it to be helpful.
Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months.
I did try @eslint/migrate-config; unfortunately it added a good amount of bloat and ended up not working.
So I just gave up and read the docs.