the beautiful code
-
I use PyCharm for this and in general it does a great job. At work we've got some massive repos and it handles them fine.
The "find" tab shows where it'll make changes, and you can click "don't change anything in this directory".
Yes, all of JetBrains' tools handle project-wide renames practically perfectly, even in weirder things like Angular projects where templates may reference variables.
-
Find and Replace?
That will catch too many false positives.
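A toy illustration (plain Python, nothing to do with how PyCharm actually does renames; the variable names are made up) of why a bare textual replace bites you:

```python
# Renaming `count` to `total` with a bare textual replace also mangles
# unrelated identifiers and string contents - exactly the false positives
# a scope-aware IDE rename avoids.
source = '''
count = 0
account_balance = 100          # unrelated identifier that contains "count"
print("final count:", count)   # the word inside the string changes too
'''

print(source.replace("count", "total"))
# `account_balance` comes out as `actotal_balance`, and the log message
# now reads "final total:", neither of which you asked for.
```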
-
It confidently gave me one
IMO, that's one of the biggest "sins" of the current LLMs: they're trained to generate words that make them sound confident.
They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.
Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.
-
AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.
Well I've got the name for my autobiography now.
-
Watching the serious people trying to use AI to code gives me the same feeling as the cybertruck people exploring the limits of their car. XD
"It's terrible and I should hate it, but gosh it it isn't just so cool"
I wish I could get so excited over disappointing garbage
-
They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.
Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.
SO answers and questions are usually edited multiple times to sound professional and confident, and to be correct.
-
All programs can be written with one less line of code.
All programs have at least one bug.
By the logical consequence of these axioms, every program can be reduced to one line of code - that doesn't work.
One day AI will get there.
All programs can be written with one less line of code.
All programs have at least one bug.
The humble "Hello world" would like a word.
-
Trying to treat the discussion as a philosophical one is giving more nuance to 'knowing' than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern-matching on the frequency of word associations, which is mimicry, not knowledge.
I'll preface by saying I agree that AI doesn't really "know" anything and is just a randomised Chinese Room. However...
Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI, but is fundamental in understanding how our own minds work. When you form arguments about how AI doesn't know things, you're basing it purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can't just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can't - or worse, discover that our assumptions about knowledge, and perhaps even of our own abilities, are flawed.
-
They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.
Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.
No, I'm sure you're wrong. There's a certain cheerful confidence that you get from every LLM response. It's this upbeat "can do attitude" brimming with confidence mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, sometimes people answering questions are overconfident, but it's often an arrogant kind of confidence, not a subservient kind of confidence you get from LLMs.
I don't think an LLM can sound like it lacks in confidence for the right reasons, but it can definitely pull off lack of confidence if it's prompted correctly. To actually lack confidence it would have to have an understanding of the situation. But, to imitate lack of confidence all it would need to do is draw on all the training data it has where the response to a question is one where someone lacks confidence.
Similarly, it's not like it actually has confidence normally. It's just been trained / meta-prompted to emit an answer in a style that mimics confidence.
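For what it's worth, that style steering is usually just prompt text. A rough sketch of what it could look like with the OpenAI Python SDK - the model name and prompt wording here are placeholders I made up, not anything a vendor actually ships:

```python
# Illustrative only: nudging the answer style toward hedging instead of the
# default confident tone by putting the instruction in the system message.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "If you are not certain of the answer, say so explicitly "
                "and explain what you would need in order to verify it."
            ),
        },
        {"role": "user", "content": "Why does my nginx config ignore this location block?"},
    ],
)
print(response.choices[0].message.content)
```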
-
I can't speak for Lemmy but I'm personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that) LLMs are a tool and we should use that tool for things it's good for. But "thinking" is not one of the things LLMs are good at. And software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/intellisense, macros, code snippets/templates can help with that and never was I bottle-necked by my typing speed when writing software.
It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can't do.
Copilot even fails to stick to the coding style already used in the same file, just because it saw a different style more often during training.
"I'm not again LLMs I just never say anything useful about them and constantly point out how I can't use them." The other guy is right and you just prove his point.
-
This is interesting, I would be quite impressed if this PR got merged without additional changes.
We'll see. Whether it gets merged in any form, it's still a big win for me because I finally was able to get some changes implemented that I had been wanting for a couple years.
are you able to read and have a decent understanding of the output code?
Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.
In one case, it got stuck debugging and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn't spot myself. The labels were appearing far too small at one point, but I couldn't see that Claude had changed any code that should affect the label size. It turned out two data structures hadn't been merged correctly, so default values weren't getting overridden properly. It was the sort of issue I could see a human dev introducing on the first pass.
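To give a feel for the shape of that bug (purely illustrative Python, not the actual Rust from the PR; the field names are invented):

```python
# The generic "defaults not overridden" merge bug: merging user settings
# into the defaults keeps the overrides; merging the other way around
# silently drops them and the defaults win.
defaults = {"label_size": 24, "color": "white"}
user_config = {"label_size": 48}

correct = {**defaults, **user_config}  # user value wins -> label_size == 48
buggy = {**user_config, **defaults}    # defaults win    -> label_size == 24

print(correct["label_size"], buggy["label_size"])  # 48 24
```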
do you know why it is uncommented?
Yes, that's the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that's apparently why it was implemented but commented out. But it seems due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.
Through the process, I also became more familiar with Rust tooling and Rust itself.
Holy shit, someone on here who knows how to use them. Surprised you haven't been downvoted into oblivion yet.
-
I'll preface by saying I agree that AI doesn't really "know" anything and is just a randomised Chinese Room. However...
Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI, but is fundamental in understanding how our own minds work. When you form arguments about how AI doesn't know things, you're basing it purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can't just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can't - or worse, discover that our assumptions about knowledge, and perhaps even of our own abilities, are flawed.
Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant.
That is not what I said. In fact, it is the opposite of what I said.
I said that treating the discussion of LLMs as a philosophical one is giving 'knowing' in the discussion of LLMs more nuance than it deserves.
-
you're a fool. chess has rules and is boxed into those rules. of course it's prime for AI.
art is subjective, I don't see the appeal personally, but I'm more of a baroque or renaissance fan.
I doubt you will but if you believe in what you say then this will only prove you right and me wrong.
what is this?
once you classify it, why did you classify it that way? is it because you personally have one? did you have to rule out what it isn't before you could identify what it could be? did you compare it to other instances of similar subjects?
now, try to classify it as someone who doesn't have these. someone who has never seen one before. someone who hasn't any idea what it could be used for. how would you identify what it is? how it's used? are there more than one?
now, how does AI classify it? does it comprehend what it is, even though it lacks a physical body? can it understand what it's used for? how it feels to have one?
my point is, AI is at least 100 years away from instinctively knowing what a hand is. I doubt you had to even think about it and your brain automatically identified it as a hand, the most basic and fundamentally important features of being a human.
if AI cannot even instinctively identify a hand as a hand, it's not possible for it to write software, because writing is based on human cognition and is entirely driven by instinct.
like a master sculptor, we carve out the words from the ether to perform tasks that not only are required, but unseen requirements that lay beneath the surface that are only known through nuance. just like the sculptor that has to follow the veins within the marble.
the AI you know today cannot do that, and frankly the hardware of today can't even support AI in achieving that goal, and it never will because of people like you promoting a half baked toy as a tool to replace nuanced human skills. only for this toy to poison pill the only training data available, that's been created through nuanced human skills.
I'll just add, I may be an internet rando to you but you and your source are just randos to me. I'm speaking from my personal experience in writing software for over 25 years along with cleaning up all this AI code bullshit for at least two years.
AI cannot code. AI writes regurgitated facsimiles of software based on its limited dataset. it's impossible for it to make decisions based on human nuance and can only make calculated assumptions based on the available dataset.
I don't know how much clearer I have to be at how limited AI is.
LMFAO. He's right about your ego.
-
"I'm not again LLMs I just never say anything useful about them and constantly point out how I can't use them." The other guy is right and you just prove his point.
I don't see how that follows, because I did point out in another comment that they are very useful if used like search engines or an interactive Stack Overflow or Wikipedia.
LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.
If you want to anthropomorphise it, current LLMs are like a person that read the entire internet, remembered a lot of it, but still is too stupid to win/draw tic tac toe.
So there is value in LLMs, if you use them for their knowledge.
-
All programs can be written with one less line of code.
All programs have at least one bug.
The humble "Hello world" would like a word.
You can fit an awful lot of Perl into one line too if you minimize it. It'll be completely unreadable to most anyone, but it'll run
-
This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.
I'll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.
Coding isn't special, you're right, but it's a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs are knowledgeable because they remembered a lot of the data and patterns of the training data, but they didn't learn to think from that. That's why LLMs can't replace humans.
That certainly doesn't mean software can't be smarter than humans. It will be, it's just a matter of time, but to get there we'll likely need AGI first.
To show you that LLMs can't think, try playing ASCII tic-tac-toe (XXO) against all those models. They are completely dumb even though they "saw" the entire Wikipedia article on how XXO works during training - that it's a solved game, the different strategies, and how to consistently force a draw - but they still can't do it. They lose most games against my four-year-old niece, and she doesn't even play good/perfect XXO.
I wouldn't trust anything that's claimed to do thinking tasks, but can't even beat my niece at XXO, with writing firmware for cars or airplanes.
LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can't think - for now at least; real thinking models will likely need different architectures than LLMs have.
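For context on "solved game": a few dozen lines of completely ordinary code already play it perfectly. A minimal sketch, mine, just as a reference point for the comparison - not a claim about how LLMs work internally:

```python
# Plain minimax over the 3x3 board: it never loses. This is the entire
# "solved game" part that the chat models keep fumbling.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move); 'X' maximises, 'O' minimises."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
            best = (score, m)
    return best

board = [" "] * 9
print("Perfect first move for X:", minimax(board, "X")[1])
```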
-
AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.
And that's what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it's imitating, but with zero understanding of why the original looked that way.
-
No, I'm sure you're wrong. There's a certain cheerful confidence that you get from every LLM response. It's this upbeat "can do attitude" brimming with confidence mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, sometimes people answering questions are overconfident, but it's often an arrogant kind of confidence, not a subservient kind of confidence you get from LLMs.
I don't think an LLM can sound like it lacks in confidence for the right reasons, but it can definitely pull off lack of confidence if it's prompted correctly. To actually lack confidence it would have to have an understanding of the situation. But, to imitate lack of confidence all it would need to do is draw on all the training data it has where the response to a question is one where someone lacks confidence.
Similarly, it's not like it actually has confidence normally. It's just been trained / meta-prompted to emit an answer in a style that mimics confidence.
ChatGPT went through a phase of overly bubbly upbeat responses, they chilled it out tho. Not sure if that's what you saw.
One thing is for sure with all of them: they never say "I don't know", because such responses aren't likely to be found in any training data!
It's probably part of some system-level prompt guidance too, like you say, to be confident.
-
All programs can be written with one less line of code.
All programs have at least one bug.
By the logical consequence of these axioms, every program can be reduced to one line of code - that doesn't work.
One day AI will get there.
On one line of code you say?
*search & replaces all line breaks with spaces*
-
All programs can be written with one less line of code.
All programs have at least one bug.
The humble "Hello world" would like a word.
Just to boast my old-timer credentials.
There is a utility program, IEFBR14, in IBM's mainframe operating system z/OS that has been there since the 60s.
It has just one assembly code instruction: a BR 14, which means basically ‘return’.
The first version was bugged and IBM had to issue a PTF (patch) to fix it.