agnos.is Forums


the beautiful code

Programmer Humor · programmerhumor · 226 Posts · 135 Posters
  • P [email protected]

    Why would I trust a drill press when it can’t even cut a board in half?

    [email protected]
    #83

    A drill press (or its inventors) doesn't claim it can do that, but with LLMs the claim is that they can replace humans on a lot of thinking tasks. They even brag about benchmark results, claim Bachelor's, Master's and PhD level intelligence, call them "reasoning" models, and still fail to beat my niece at tic-tac-toe, who by the way doesn't have a PhD in anything 🤣

    LLMs are typically good at things that appeared a lot in their training data. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that looks OK, even during PR review, but might blow up in your face a few weeks later.

    If they can't handle things they did see during training (but only sparsely, like tic-tac-toe), they won't produce code you should use in production. I wouldn't trust any junior dev who doesn't place their O right next to the two Xs.
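The "two Xs" failure is concrete enough to sketch: blocking an imminent win in tic-tac-toe is a single pass over the eight winning lines. A minimal illustration (hypothetical code, not from the thread):

```python
# The eight winning lines of a 3x3 board, as index triples.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def blocking_move(board):
    """board is a list of 9 cells: 'X', 'O', or None. Returns the index
    O must take to block an immediate X win, or None if there is no threat."""
    for line in WIN_LINES:
        cells = [board[i] for i in line]
        if cells.count('X') == 2 and cells.count(None) == 1:
            return line[cells.index(None)]
    return None

# Two X's on the top row: O has to take index 2.
board = ['X', 'X', None, None, 'O', None, None, None, None]
print(blocking_move(board))  # 2
```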

    • P [email protected]

      Why would I trust a drill press when it can’t even cut a board in half?

      [email protected]
      #84

      It’s futile even trying to highlight the things LLMs do very well as Lemmy is incredibly biased against them.

      • M [email protected]

        The lead dev is not available this summer to review, but you can review here: https://github.com/edzdez/sway-easyfocus/pull/22

        It's not great that four changes are rolled into a single PR, but that's my issue not Claude's because they were related and I wanted to test them all at once.

        [email protected]
        #85

        This is interesting, I would be quite impressed if this PR got merged without additional changes.

        I am genuinely curious, no judgment at all: since you mentioned that you are not a Rust/GTK expert, are you able to read and have a decent understanding of the output code?

        For example, in the sway.rs file, you uncommented a piece of code about floating nodes in the get_all_windows function. Do you know why it was uncommented? (Again, not trying to judge; it's a genuine question. I also don't know Rust or GTK, just curious.)

  • A [email protected]

          Code that works is also just text.

          [email protected]
          #86

          Text that's not code might also work.

          • C [email protected]

            Then I am quite confused what an LLM is supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my eslint upgrade for half a year: I don't have much experience with TypeScript besides one project in my college webdev class.

            So if I can sit down for a couple of hours and port my rather simple eslint config, which is arguably the most mechanical task I have seen in my limited programming experience, and the LLM can't produce anything close to correct, then I am rather confused what "real programmers" would use it for...

            People here say boilerplate code, but honestly I don't recall the last time I needed to write a lot of boilerplate.

            I have also tried to use an LLM to debug SELinux and Docker containers on my homelab; unfortunately, it is absolutely useless at that as well.

            [email protected]
            #87

            With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?

            People tend to despise or evangelize LLMs. To me, github copilot has a decent amount of utility. I only use the auto-complete feature which does things like save me from typing 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press tab. It's just a time saver. I have never used it like "write me a script or a function that does x" like some people do. I am not interested in that as it seems like a sad crutch that I'd need to customize so much anyway that I may as well skip that step.

            Having said that, I'm noticing the Copilot autocomplete seems to be getting worse over time. I'm not sure why it's worsening, but if it ever stops feeling worth it, I'll drop it, no harm no foul. The binary thinkers tend to assume you're either a good dev who despises all forms of AI or an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.

            PS. Did you try the eslint 8 -> 9 migration tool? If your config was simple enough, it likely would've done all or almost all the work for you. It didn't fully work for me; I had to resolve several errors, because I tend to add custom plugins, presets, and rules that differ across projects.

            • C [email protected]
              This post did not contain any content.
              [email protected]
              #88

              I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

              • C [email protected]
                This post did not contain any content.
                [email protected]
                #89

                Can't wait to see "we use AI agents to generate well-structured non-functioning code" on a website with everything off-center and broken embeds.

                • O [email protected]

                  well, it only took 2 years to go from the cursed will smith eating spaghetti video to veo3 which can make completely lifelike videos with audio. so who knows what the future holds

                  [email protected]
                  #90

                  There actually isn't really any doubt that AI (especially AGI) will surpass humans on all thinking tasks unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.

                  • W [email protected]

                    A drill press (or its inventors) doesn't claim it can do that, but with LLMs the claim is that they can replace humans on a lot of thinking tasks. They even brag about benchmark results, claim Bachelor's, Master's and PhD level intelligence, call them "reasoning" models, and still fail to beat my niece at tic-tac-toe, who by the way doesn't have a PhD in anything 🤣

                    LLMs are typically good at things that appeared a lot in their training data. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that looks OK, even during PR review, but might blow up in your face a few weeks later.

                    If they can't handle things they did see during training (but only sparsely, like tic-tac-toe), they won't produce code you should use in production. I wouldn't trust any junior dev who doesn't place their O right next to the two Xs.

                    [email protected]
                    #91

                    Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                    I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                    • D [email protected]

                      It’s futile even trying to highlight the things LLMs do very well as Lemmy is incredibly biased against them.

                      [email protected]
                      #92

                      I can't speak for Lemmy, but I'm personally not against LLMs and use them on a regular basis. As Pennomi said (and I totally agree), LLMs are a tool, and we should use that tool for the things it's good at. But "thinking" is not one of those things, and software engineering requires a ton of thinking. Of course there are tasks (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates can help with that, and never was I bottlenecked by my typing speed when writing software.

                      It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can't do.

                      Copilot even fails to stick to coding style from the same file, just because it saw a different style more often during training.

                      • C [email protected]
                        This post did not contain any content.
                        [email protected]
                        #93

                        Ctrl+A + Del.

                        So clean.

                        • P [email protected]

                          Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                          I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                          [email protected]
                          #94

                          Totally agree, and I don't think anybody would see that as controversial. LLMs are actually good at a lot of things, just not thinking, and typically not the areas you're an expert in. That's why LLMs know more about human anatomy than I do, but probably not more than most people with a medical degree.

                          • P [email protected]

                            Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.

                            LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.

                            [email protected]
                            #95

                            No, a human would just find an API that is publicly available. And the fact that it knew the static class "Misc" means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized as "I want to color a player's model in GAME using Python and SCRIPTING ENGINE".

                            • S [email protected]

                              4o has been able to do this for months.

                              [email protected]
                              #96

                              I tried; it can't get through four lines without messing up, unless I give it tasks so stupendously simple that I'm faster typing them myself while watching TV.

                              • P [email protected]

                                Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and still in many cases that’s true, eg. I don’t have the AI do anything with OpenCV or complex math) but it’s very routine these days for common use cases like web development.

                                [email protected]
                                #97

                                You must be a big fan of boilerplate

                                • P [email protected]

                                  To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

                                  LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

                                  [email protected]
                                  #98

                                  Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates functions that would be fantastic to use - if they existed.

                                  • A [email protected]

                                    Code that works is also just text.

                                    [email protected]
                                    #99

                                    It is text, but not just text

                                    • D [email protected]

                                      I use ChatGPT for Go programming all the time, and it rarely has problems; I think Go is more niche than Kotlin

                                      [email protected]
                                      #100

                                      I get a bit frustrated at it trying to replicate everyone else's code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of go's standard, which was not sufficient for me anymore. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard go errors everywhere.

                                      • C [email protected]
                                        This post did not contain any content.
                                        [email protected]
                                        #101

                                        It's like having a junior developer with a world of confidence who just changes shit and spends hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.

                                        I asked ChatGPT today to give me a simple squid proxy config that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work: it let HTTP through, and despite many attempts at a working config, it just kept failing.

                                        So yeah, in the end I have to learn squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
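For context, the kind of config being asked for is only a few lines of squid ACL syntax. A hedged sketch using standard squid.conf directives (untested here, and not the config ChatGPT produced):

```
# Sketch: allow only CONNECT tunnels to port 443 (HTTPS),
# deny everything else, including plain HTTP.
acl HTTPS_port port 443
acl CONNECT method CONNECT
http_access allow CONNECT HTTPS_port
http_access deny all
```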

                                        • B [email protected]

                                          You must be a big fan of boilerplate

                                          [email protected]
                                          #102

                                          Not sure what you mean, boilerplate code is one of the things AI is good at.

                                          Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.

                                          More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.
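The "requires no decision making" point can be illustrated without a full Django project: admin/API boilerplate is derivable field-by-field from the model definition. A hedged sketch in plain Python, where a dataclass stands in for a Django model (the Article model and its field names are invented for illustration):

```python
from dataclasses import dataclass, fields

@dataclass
class Article:          # stand-in for a models.py entry (hypothetical)
    title: str
    author: str
    published: bool

# The kind of boilerplate an admin file repeats: one entry per field,
# mechanically derived from the model, with no decisions to make.
list_display = tuple(f.name for f in fields(Article))
print(list_display)  # ('title', 'author', 'published')
```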
