agnos.is Forums


the beautiful code

Programmer Humor
programmerhumor
226 Posts 135 Posters
  • C [email protected]

    This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.

    I'll listen to experts who study the topic over an internet rando. AI model capabilities have so far shown no signs of slowing their exponential growth.

    [email protected]
    last edited by [email protected]
    #153

    Coding isn't special, you're right, but it's a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs appear knowledgeable because they memorized a lot of the data and patterns in their training data, but they never learned to think from it. That's why LLMs can't replace humans.

    That certainly doesn't mean software can't be smarter than humans. It will be, and it's just a matter of time, but to get there we'll likely need AGI first.

    To see that LLMs can't think, try playing ASCII tic-tac-toe (XXO) against any of those models. They are completely lost, even though they "saw" the entire Wikipedia article on how xxo works during training: that it's a solved game, the different strategies, and how to consistently force a draw. Still, they can't do it. They lose most games against my four-year-old niece, and she doesn't even play perfect xxo.

    I wouldn't trust anything that's claimed to handle thinking tasks, yet can't even beat my niece at xxo, to write firmware for cars or airplanes.

    LLMs are great when used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can't think, at least for now; real thinking models will likely need a different architecture than LLMs have.
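
    To make "solved game" concrete, here is a minimal Python sketch of perfect play via brute-force search; the board encoding and function names are just illustrative, not from any library:

        # Perfect 3x3 tic-tac-toe via tiny negamax search. The board is a
        # list of 9 cells holding "X", "O", or " ".
        WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

        def winner(b):
            # Return "X" or "O" if someone has three in a row, else None.
            for i, j, k in WINS:
                if b[i] != " " and b[i] == b[j] == b[k]:
                    return b[i]
            return None

        def best_move(b, player):
            # Returns (score, move) from `player`'s view:
            # +1 = forced win, 0 = draw, -1 = forced loss.
            w = winner(b)
            if w is not None:
                return (1 if w == player else -1), None
            free = [i for i, c in enumerate(b) if c == " "]
            if not free:
                return 0, None  # board full: draw
            other = "O" if player == "X" else "X"
            best_score, best = -2, None
            for m in free:
                b[m] = player
                score, _ = best_move(b, other)  # opponent's best reply
                b[m] = " "
                if -score > best_score:        # negate: their loss is our gain
                    best_score, best = -score, m
            return best_score, best

        # Perfect play from the empty board is always a draw (score 0):
        print(best_move([" "] * 9, "X"))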

  • [email protected]

      AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.

      [email protected]
      #154

      And that's what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it's imitating, but with zero understanding of why the original looked that way.

  • [email protected]

        No, I'm sure you're wrong. There's a certain cheerful confidence you get from every LLM response. It's an upbeat "can-do attitude" mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, the people answering questions are sometimes overconfident, but theirs is usually an arrogant kind of confidence, not the subservient kind you get from LLMs.

        I don't think an LLM can sound like it lacks confidence for the right reasons, but it can definitely pull off a lack of confidence if prompted correctly. To actually lack confidence, it would have to understand the situation. But to imitate a lack of confidence, all it has to do is draw on the training data where answers to questions are phrased unconfidently.

        Similarly, it doesn't actually have confidence normally. It's just been trained / meta-prompted to emit answers in a style that mimics confidence.

        [email protected]
        last edited by [email protected]
        #155

        ChatGPT went through a phase of overly bubbly, upbeat responses, though they've chilled it out since. Not sure if that's what you saw.

        One thing is for sure with all of them: they never say "I don't know", because such responses aren't likely to be found in any training data!

        It's probably also part of some system-level prompt guidance, like you say, to be confident.

        • X [email protected]

          All programs can be written with one less line of code.
          All programs have at least one bug.

          By induction from these axioms, every program can be reduced to one line of code - which doesn't work.

          One day AI will get there.

          [email protected]
          #156

          On one line of code you say?

          *search & replaces all line breaks with spaces*

  • [email protected]

            All programs can be written with one less line of code.
            All programs have at least one bug.

            The humble "Hello world" would like a word.

            [email protected]
            #157

            Just to boast my old-timer credentials:

            There is a utility program in IBM's mainframe operating system, z/OS, that has been there since the 60s.

            It has just one assembly instruction: a BR 14, which basically means "return".

            The first version was bugged, and IBM had to issue a PTF (a patch) to fix it.

            • L [email protected]

              ChatGPT went through a phase of overly bubbly, upbeat responses, though they've chilled it out since. Not sure if that's what you saw.

              One thing is for sure with all of them: they never say "I don't know", because such responses aren't likely to be found in any training data!

              It's probably also part of some system-level prompt guidance, like you say, to be confident.

              [email protected]
              #158

              I think "I don't know" might sometimes be found in the training data. But, I'm sure they optimize the meta-prompts so that it never shows up in a response to people. While it might be the "honest" answer a lot of the time, the makers of these LLMs seem to believe that people would prefer confident bullshit that's wrong over "I don't know".

              • O [email protected]

                I get a bit frustrated at it trying to replicate everyone else's code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of Go's standard, which was no longer sufficient for me. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard Go errors everywhere.

                [email protected]
                #159

                Yes, Copilot can't generate files that follow your own code structure if you start from scratch. I usually code a scaffold first and then use Copilot to complete the rest, which works quite well most of the time. Another possibility is to create comment templates that give instructions to Copilot, so every new Go file starts with coding-structure comments that Copilot will respect. Junior devs might also respect them, but I'm not so sure about them.
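
                As an illustration of such a template, a sketch in Python for brevity (the convention list and the ProjectError name are made-up examples; the same idea carries over to Go files):

                    # --- file conventions (kept at the top so the assistant sees them) ---
                    # 1. Never raise bare built-in exceptions across module boundaries;
                    #    wrap them in ProjectError with the failing operation attached.
                    # 2. Public functions get type hints and a one-line docstring.
                    from typing import Optional

                    class ProjectError(Exception):
                        """Project-wide error type: wraps a cause with the failing operation."""
                        def __init__(self, op: str, cause: Optional[Exception] = None):
                            self.op, self.cause = op, cause
                            super().__init__(f"{op} failed" + (f": {cause}" if cause else ""))

                    def read_config(path: str) -> str:
                        """Read a config file, wrapping I/O failures per convention 1."""
                        try:
                            with open(path) as f:
                                return f.read()
                        except OSError as e:
                            raise ProjectError("read_config", e) from e

                With the conventions visible at the top of the file, completions tend to copy them instead of falling back to the language defaults.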

                • Z [email protected]

                  LMFAO. He's right about your ego.

                  [email protected]
                  #160

                  Thank you for your input, obvious troll account.

                  • P [email protected]

                    To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

                    LLMs are good for simple bits of logic, under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force them to do things beyond that are just being silly.

                    [email protected]
                    #161

                    I'm with you on this one. It is also very helpful with argument-heavy libraries like plotly. If I ask a simple question like "in plotly, how do I do this and that to the x-axis?", it generally gives correct answers, saving me 5-10 minutes of internet research or reading the documentation of functions with 1000 inputs. I even managed to get it to render a simple scene of a cloud of points with some interactivity in three.js, after about 30 minutes of back and forth; not knowing much JavaScript, that would have taken me at least a couple of hours. So yeah, it can be useful as an assistant to someone who already knows how to code (so the person can vet and debug the code).
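
                    For example, the kind of answer it saves you from digging for (update_xaxes is the real plotly call; the specific values here are arbitrary):

                        import plotly.graph_objects as go

                        # A minimal figure, then the x-axis tweaks you'd otherwise
                        # hunt through the docs for.
                        fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[2, 1, 3]))
                        fig.update_xaxes(
                            title_text="time (s)",  # axis label
                            range=[0, 5],           # pin the visible range
                            tickangle=45,           # rotate tick labels
                            showgrid=False,         # drop vertical grid lines
                        )
                        fig.show()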

                    Though if you weigh the pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy consumed, very convincing disinformation bots), I'm not convinced the benefits are worth the damage.

  • [email protected]

                      Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant.

                      That is not what I said. In fact, it is the opposite of what I said.

                      I said that treating the discussion of LLMs as a philosophical one is giving 'knowing' in the discussion of LLMs more nuance than it deserves.

                      [email protected]
                      last edited by [email protected]
                      #162

                      I never said discussing LLMs was itself philosophical. I said that as soon as you ask the question "but does it really know?", you are immediately entering the territory of the theory of knowledge, whether you're talking about humans, dogs, bees, or, yes, AI.

                      • X [email protected]

                        All programs can be written with one less line of code.
                        All programs have at least one bug.

                        By induction from these axioms, every program can be reduced to one line of code - which doesn't work.

                        One day AI will get there.

                        [email protected]
                        #163

                        The ideal code is no code at all

  • [email protected]

                          Well I've got the name for my autobiography now.

                          [email protected]
                          #164

                          "Specifically Annoying" or "Plausible Bullshit"? I'd buy the latter.

                          • M [email protected]

                            This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.

                            [email protected]
                            #165

                            Dude... the point is I don't have to be. I just have to be human and use it. If it sucks, I'm gonna say that.

                            • K [email protected]

                              So it's 50% better than my code?

                              [email protected]
                              #166

                              If the code cannot uphold correctness, it is 0% better than your code.

  • [email protected]

                                It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.

                                I asked ChatGPT today for a simple Squid proxy config that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work: it let HTTP through, and despite many attempts at a working config, it just kept failing.

                                So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that....

                                [email protected]
                                #167

                                I have a friend who swears by LLMs; he says they help him a lot. I once watched him work, and the experience was exactly as you described: he wasted a couple of hours fighting with the bullshit generator just to do everything himself anyway. I asked him whether it wouldn't be better not to waste that time, but he didn't really see the problem; he had gaslit himself into believing that fighting with the idiot machine helped.

                                • A [email protected]

                                  Just to boast my old-timer credentials:

                                  There is a utility program in IBM's mainframe operating system, z/OS, that has been there since the 60s.

                                  It has just one assembly instruction: a BR 14, which basically means "return".

                                  The first version was bugged, and IBM had to issue a PTF (a patch) to fix it.

                                  [email protected]
                                  #168

                                  Okay, you can't just drop that bombshell without elaborating. What sort of bug could exist in a program that contains a single return instruction?!?

  • [email protected]
                                    This post did not contain any content.
                                    [email protected]
                                    #169

                                    Write tests and run them; iterate until all tests pass.
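
                                    A minimal version of that loop with pytest, assuming a hypothetical slugify module as the thing being generated:

                                        # test_slugify.py -- the spec you hand to the generator; rerun
                                        # `pytest` and paste failures back until everything is green.
                                        from slugger import slugify  # hypothetical module under test

                                        def test_lowercases_and_dashes():
                                            assert slugify("Hello World") == "hello-world"

                                        def test_strips_punctuation():
                                            assert slugify("Ship it!") == "ship-it"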

                                    • S [email protected]

                                      But not just text.

                                      Also, that's not the converse of what the parent comment said.

                                      [email protected]
                                      #170

                                      Did you want to converse about conversing?

                                      • A [email protected]

                                        Just to boast my old-timer credentials:

                                        There is a utility program in IBM's mainframe operating system, z/OS, that has been there since the 60s.

                                        It has just one assembly instruction: a BR 14, which basically means "return".

                                        The first version was bugged, and IBM had to issue a PTF (a patch) to fix it.

                                        [email protected]
                                        #171

                                        Reminds me of how on some old Unix systems, /bin/true was a shell script.

                                        ...well, if it just needs to be a program that returns 0, that's a reasonable way to do it. An empty shell script returns 0.

                                        Of course, since this was an old proprietary Unix system, the shell script had a giant header comment saying that this was proprietary information and that if you disclosed it, the lawyers would come at you like a ton of bricks. ...never mind that it was a program that literally does nothing.

                                        • B [email protected]

                                          Write tests and run them; iterate until all tests pass.

                                          [email protected]
                                          last edited by [email protected]
                                          #172

                                          That doesn't sound vibey to me, though. You expect people to actually code? /s
