agnos.is Forums


the beautiful code

Programmer Humor · 226 posts · 135 posters
  • B [email protected]

    For the most part "Rename symbol" in VSCode will work well. But it's limited by scope.

[email protected]
    #128

    Yeah, I'm looking for something that would understand the operation (? insert correct term here) of the language well enough to rename intelligently.

    • L [email protected]

      I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

[email protected]
      #129

      It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!

      ...but it won't be that impressive once we remember concepts like "monkey, typing, Shakespeare" were already embedded in the training data.

      • W [email protected]

Practically all LLMs aren't good for any logic. Try to play ASCII tic-tac-toe against one. All GPT models lost against my four-year-old niece, and I wouldn't trust her writing production code 🤣

Once a single model (doesn't have to be an LLM) can beat Stockfish in chess, AlphaGo in Go, and my niece in tic-tac-toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn't compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).

I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will, it's just a matter of time, but state-of-the-art LLMs are basically just extremely good "search engines" or interactive versions of "stack overflow", not good enough to do real "thinking tasks".

[email protected]
        #130

        extremely good "search engines" or interactive versions of "stack overflow"

        Which is such a decent use of them! I've used it on my own hardware a few times just to say "Hey give me a comparison of these things", or "How would I write a function that does this?" Or "Please explain this more simply...more simply....more simply..."

        I see it as a search engine that connects nodes of concepts together, basically.

        And it's great for that. And it's impressive!

        But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of "It'll replace all human coders" fever dreams.

        • G [email protected]

          someone drank the koolaid.

          LLMs will never code for two reasons.

          one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and provide an interpretation of the collection of their content.

          software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.

          two, because the prevalence of LLM generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.

          the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.

          so yeah. we're about 100 years from the whole "it can't draw its hands" stage because it doesn't even know what hands are.

[email protected]
          #131

          This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.

          I'll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.

          • M [email protected]

            It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!

            ...but it won't be that impressive once we remember concepts like "monkey, typing, Shakespeare" were already embedded in the training data.

[email protected]
            #132

            If we just asked Jeeves in the first place we wouldn't be in this mess.

• [email protected]
              This post did not contain any content.
[email protected]
              #133

AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.

              • S [email protected]

                Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.

[email protected]
                #134

                Find and Replace?

• [email protected]

It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.

I asked ChatGPT to give me a simple Squid proxy config today that blocks everything except https. It confidently gave me one, but of course it didn't work. It let through http, and despite many attempts to get a working config, it just failed.

So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
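For what it's worth, the config being asked for fits in a handful of lines. A minimal sketch (untested; directive and ACL names follow stock squid.conf conventions, and a real setup would also keep the default Safe_ports rules):

```
# Listen on the default proxy port
http_port 3128

# https through a proxy arrives as a CONNECT tunnel to port 443
acl SSL_ports port 443
acl CONNECT method CONNECT

# Allow only CONNECT-to-443; plain http (GET/POST etc.) falls through to deny
http_access allow CONNECT SSL_ports
http_access deny all
```

The key point is that "block everything except https" means allowing only the CONNECT method to port 443; any config that merely filters URLs will still let plain http through.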

[email protected]
                  #135

                  It confidently gave me one

                  IMO, that's one of the biggest "sins" of the current LLMs, they're trained to generate words that make them sound confident.

                  • C [email protected]

                    This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.

                    I'll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.

[email protected]
                    #136

                    you're a fool. chess has rules and is boxed into those rules. of course it's prime for AI.

                    art is subjective, I don't see the appeal personally, but I'm more of a baroque or renaissance fan.

                    I doubt you will but if you believe in what you say then this will only prove you right and me wrong.

                    what is this?

[image: 1000001583]

                    once you classify it, why did you classify it that way? is it because you personally have one? did you have to rule out what it isn't before you could identify what it could be? did you compare it to other instances of similar subjects?

                    now, try to classify it as someone who doesn't have these. someone who has never seen one before. someone who hasn't any idea what it could be used for. how would you identify what it is? how it's used? are there more than one?

                    now, how does AI classify it? does it comprehend what it is, even though it lacks a physical body? can it understand what it's used for? how it feels to have one?

                    my point is, AI is at least 100 years away from instinctively knowing what a hand is. I doubt you had to even think about it and your brain automatically identified it as a hand, the most basic and fundamentally important features of being a human.

if AI cannot even instinctively identify a hand as a hand, it's not possible for it to write software, because writing is based on human cognition and is entirely driven by instinct.

                    like a master sculptor, we carve out the words from the ether to perform tasks that not only are required, but unseen requirements that lay beneath the surface that are only known through nuance. just like the sculptor that has to follow the veins within the marble.

                    the AI you know today cannot do that, and frankly the hardware of today can't even support AI in achieving that goal, and it never will because of people like you promoting a half baked toy as a tool to replace nuanced human skills. only for this toy to poison pill the only training data available, that's been created through nuanced human skills.

                    I'll just add, I may be an internet rando to you but you and your source are just randos to me. I'm speaking from my personal experience in writing software for over 25 years along with cleaning up all this AI code bullshit for at least two years.

AI cannot code. AI writes regurgitated facsimiles of software based on its limited dataset. it's impossible for it to make decisions based on human nuance and it can only make calculated assumptions based on the available dataset.

                    I don't know how much clearer I have to be at how limited AI is.

                    • S [email protected]

                      Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium sized Python project? I don't mean easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean be able to differentiate between local and/or library variables so it doesn't change them, only the correct versions.

[email protected]
                      #137

I'm going to laugh in Java, where this has always been possible and reliable. Not AI-reliable, but expert-reliable. Because of static types.

                      • L [email protected]

                        I use pycharm for this and in general it does a great job. At work we've got some massive repos and it'll handle it fine.

                        The "find" tab shows where it'll make changes and you can click "don't change anything in this directory"

[email protected]
                        #138

                        Yes, all of JetBrains' tools handle project-wide renames practically perfectly, even in weirder things like Angular projects where templates may reference variables.

• [email protected]

                          Find and Replace?

[email protected]
                          #139

                          that will catch too many false positives
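The difference is easy to demonstrate: a textual search hits every occurrence of a name, while a scope-aware rename resolves which binding each occurrence refers to. A minimal Python sketch using the standard-library ast module (illustrative only; a real rename refactoring, as in PyCharm or rope, handles far more cases):

```python
import ast
import re

SOURCE = """
def area(r):
    pi = 3.14159          # local 'pi' we want to rename
    return pi * r * r

def tau(pi):              # unrelated parameter also called 'pi'
    return 2 * pi
"""

# Naive find-and-replace: matches every textual 'pi',
# including the unrelated parameter and even comments.
naive_hits = len(re.findall(r"\bpi\b", SOURCE))

# Scope-aware: only count Name nodes inside the 'area' function.
tree = ast.parse(SOURCE)
area_fn = next(n for n in ast.walk(tree)
               if isinstance(n, ast.FunctionDef) and n.name == "area")
scoped_hits = sum(isinstance(n, ast.Name) and n.id == "pi"
                  for n in ast.walk(area_fn))

# The textual count is strictly higher than the scoped count,
# which is exactly why blind find-and-replace corrupts code.
print(naive_hits, scoped_hits)
```

A proper rename tool builds on this kind of scope resolution and then rewrites only the occurrences bound to the selected definition.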

• [email protected]

                            It confidently gave me one

                            IMO, that's one of the biggest "sins" of the current LLMs, they're trained to generate words that make them sound confident.

[email protected]
                            #140

                            They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

                            Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.

• [email protected]

AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.

[email protected]
                              #141

                              Well I've got the name for my autobiography now.

• [email protected]
                                This post did not contain any content.
[email protected]
                                #142

Watching the serious people trying to use AI to code gives me the same feeling as the cybertruck people exploring the limits of their car. XD

"It's terrible and I should hate it, but gosh, isn't it just so cool?"

I wish I could get so excited over disappointing garbage.

• [email protected]

                                  They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

                                  Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.

[email protected]
                                  #143

                                  SO answers and questions are usually edited multiple times to sound professional, confident, and be correct.

                                  • X [email protected]

All programs can be written with one less line of code.
All programs have at least one bug.

By the logical consequences of these axioms, every program can be reduced to one line of code - that doesn't work.

                                    One day AI will get there.
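Spelled out, the gag above is an induction argument on program length; as a sketch (labels A1/A2 are mine):

```latex
% A1: every N-line program has an equivalent (N-1)-line program.
% A2: every program contains at least one bug.
%
% Applying A1 repeatedly, N \to N-1 \to \dots \to 1:
\forall P \;\exists P_1 \equiv P \text{ with } \mathrm{lines}(P_1) = 1
% and by A2 the result still has a bug:
\mathrm{bugs}(P_1) \ge 1
% i.e. every program reduces to one broken line of code.
```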

[email protected]
                                    #144

All programs can be written with one less line of code.
All programs have at least one bug.

                                    The humble "Hello world" would like a word.

• [email protected]

Trying to treat the discussion as a philosophical one is giving more nuance to 'knowing' than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern matching on word-association frequencies, which is mimicry, not knowledge.

[email protected]
                                      #145

I'll preface by saying I agree that AI doesn't really "know" anything and is just a randomised Chinese Room. However...

Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI; it is fundamental to understanding how our own minds work. When you form arguments about how AI doesn't know things, you're basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can't just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can't - or worse, discover that our assumptions about knowledge, and perhaps even of our own abilities, are flawed.

• [email protected]

                                        They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

                                        Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.

[email protected]
                                        #146

                                        No, I'm sure you're wrong. There's a certain cheerful confidence that you get from every LLM response. It's this upbeat "can do attitude" brimming with confidence mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, sometimes people answering questions are overconfident, but it's often an arrogant kind of confidence, not a subservient kind of confidence you get from LLMs.

                                        I don't think an LLM can sound like it lacks in confidence for the right reasons, but it can definitely pull off lack of confidence if it's prompted correctly. To actually lack confidence it would have to have an understanding of the situation. But, to imitate lack of confidence all it would need to do is draw on all the training data it has where the response to a question is one where someone lacks confidence.

                                        Similarly, it's not like it actually has confidence normally. It's just been trained / meta-prompted to emit an answer in a style that mimics confidence.

                                        • W [email protected]

                                          I can't speak for Lemmy but I'm personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that) LLMs are a tool and we should use that tool for things it's good for. But "thinking" is not one of the things LLMs are good at. And software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/intellisense, macros, code snippets/templates can help with that and never was I bottle-necked by my typing speed when writing software.

                                          It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can't do.

                                          Copilot even fails to stick to coding style from the same file, just because it saw a different style more often during training.

[email protected]
                                          #147

"I'm not against LLMs, I just never say anything useful about them and constantly point out how I can't use them." The other guy is right and you just prove his point.
