agnos.is Forums

the beautiful code

Programmer Humor · 226 Posts · 135 Posters
  • S [email protected]

    I have nothing to prove to you. If you wish to keep doing everything by hand, that's fine.

    But there are plenty of engineers, L3 and beyond, including myself, using this to lighten their workload daily, and acting like that isn't the case is arguing in bad faith, or means you don't work in the industry.

    [email protected] · #210

    I do use it; it's handy for some sloppy CSS, for example. Emphasis on sloppy. I was kind of hoping you actually had something there.

    • T [email protected]

      The cursed "Will Smith eating spaghetti" clip wasn't made with the best AI video model available at the time, just with what consumers could run on their own hardware at the time. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.

      [email protected] · #211

      But wouldn't your point still be true today, that the best AI video models are the ones that are not available to consumers?

      • W [email protected]

        But wouldn't your point still be true today, that the best AI video models are the ones that are not available to consumers?

        [email protected] · #212

        It probably is still true, but I haven't been paying close attention to the AI market in the last couple of years. The point I was trying to make, though, was that it's an apples-to-oranges comparison.

[email protected] wrote:

          It’s true that one is based on continuous floats and the other is dynamic peaks

          Can you please explain what you’re trying to say here?

          [email protected] · #213

          Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating-point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".

          They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
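To make the "all at once, no notion of time" contrast concrete, here is a minimal sketch in plain Python (toy weights chosen purely for illustration) of how one layer of artificial neurons is evaluated:

```python
def relu(v):
    # The usual artificial activation: clamp negatives to zero.
    return [max(0.0, x) for x in v]

def dense(weights, bias, inputs):
    """One layer of artificial neurons: every output activation is
    computed from all input activations in a single pass. Nothing
    spikes, ramps, or fires early."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, bias)
    ])

# Toy numbers: 3 neurons, each with 4 incoming synapses.
W = [[0.2, -0.5, 0.1, 0.8],
     [-0.3, 0.4, 0.9, -0.1],
     [0.7, 0.2, -0.6, 0.3]]
b = [0.1, -0.2, 0.0]
x = [1.0, 0.5, -1.0, 2.0]

print(dense(W, b, x))
```

The whole layer is just a matrix multiply plus an activation function, which is exactly why there's no analogue of spike timing in it.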

          • J [email protected]

            If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.

            [email protected] · #214

            I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.

            • C [email protected]

              Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating-point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".

              They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.

              [email protected] · #215

              Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain, though.

              In a neural network, the neuron receives an input, applies a mathematical function to it, and returns an output, right?

              Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before considering the chemical component of the brain.

              I understand why terminology was reused when experts were designing an architecture meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and the like.

              • W [email protected]

                I don't see how that follows, because I did point out in another comment that they are very useful if used like search engines, or as an interactive Stack Overflow or Wikipedia.

                LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

                If you want to anthropomorphise it, current LLMs are like a person who has read the entire internet and remembered a lot of it, but is still too stupid to win or draw at tic-tac-toe.

                So there is value in LLMs, if you use them for their knowledge.

                [email protected] · #216

                You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.

                • G [email protected]

                  Thank you for your input, obvious troll account.

                  [email protected] · #217

                  Ahh, nothing but a lack of understanding and insults. Typical.

                  • L [email protected]

                    On one line of code you say?

                    *search & replaces all line breaks with spaces*

                    [email protected] · #218

                    Fired for not hitting the line-count quota even junior devs manage to hit.

[email protected] wrote:

                      Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain, though.

                      In a neural network, the neuron receives an input, applies a mathematical function to it, and returns an output, right?

                      Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before considering the chemical component of the brain.

                      I understand why terminology was reused when experts were designing an architecture meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and the like.

                      [email protected] · #219

                      Agreed. They started out trying to make artificial nerves, but then made something totally different. The fact that we see the same biases and failure mechanisms emerging in them, now that we're measuring them at scale, is actually a huge surprise. It probably says something deep and fundamental about the geometry of randomly chosen high-dimensional function spaces, regardless of how they're implemented.

                      Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before considering the chemical component of the brain.

                      I wouldn't say none. What the axons, dendrites, and synapses are doing is very well understood down to the molecular level, so that's the input and output part. I'm aware that knowledge of the biological equivalents of the other stuff (the ReLU function and backpropagation) is incomplete. I do assume some things are clear even there, although you'd have to ask a neurologist for details.

                      • B [email protected]

                        It can get pretty bad quickly, even with a small project of only 15-20 files. I've been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.

                        And while it's incredibly impressive how it creates all the steps, it then goes into chaos mode, where it starts ignoring all the rules. It'll start changing tests, pulling in random libraries, not at all thinking holistically about how everything fits together.

                        Then you try to reel it in, and it continues to run rampant. And for me, that's when I either take the wheel or roll back.

                        I highly recommend every programmer watch it in action.

                        [email protected] · #220

                        Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).
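As an illustration of that limit: once a conversation outgrows the window, the oldest tokens simply fall off, and nothing is "remembered". A toy sketch of the truncation (whitespace splitting is a crude stand-in for a real tokenizer, and the message history is made up):

```python
def truncate_to_window(messages, max_tokens):
    """Keep the newest messages that still fit in the context window.
    Token counting here is a crude whitespace approximation."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-first
        n = len(msg.split())
        if used + n > max_tokens:
            break                       # everything older is forgotten
        kept.append(msg)
        used += n
    return list(reversed(kept))

history = ["define the User model", "add auth middleware",
           "write tests for login", "refactor the session store"]
print(truncate_to_window(history, max_tokens=8))
# ['write tests for login', 'refactor the session store']
```

With an 8-token budget, the two oldest instructions vanish entirely, which matches the experience of the model "forgetting the rules" partway through a session.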

                        • Z [email protected]

                          You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.

                          [email protected] · #221

                          I didn't say they have no knowledge, quite the opposite. Here's a quote from the comment you answered:

                          LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

                          There is a subtle difference between intelligent and knowledgeable. LLMs know a lot, in the sense that they can remember a lot of things, but they are dumb in the sense that they are completely unable to draw conclusions and put that knowledge into action by any means other than spitting out again what they once learned.

                          That's why LLMs can tell you all about the game theory of tic-tac-toe but can't draw or win that game consistently.

                          So knowing a lot and still being dumb is not a contradiction.
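The tic-tac-toe example is telling for scale: the game is small enough that a few lines of ordinary code play it perfectly via exhaustive minimax, exactly the kind of closed-form reasoning an LLM can describe but struggles to execute. A sketch (board as a 9-character string, spaces for empty cells):

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Value of the position for X: +1 win, 0 draw, -1 loss."""
    w = winner(b)
    if w:
        return 1 if w == "X" else -1
    if " " not in b:
        return 0
    nxt = "O" if player == "X" else "X"
    vals = [minimax(b[:i] + player + b[i + 1:], nxt)
            for i in range(9) if b[i] == " "]
    return max(vals) if player == "X" else min(vals)

# Exhaustive search confirms perfect play from the empty board is a draw.
print(minimax(" " * 9, "X"))  # 0
```

A solver like this never loses; an LLM asked to play move-by-move often does, which is the knowledgeable-but-dumb gap in miniature.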

                          • C [email protected]

                            Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).

                            [email protected] · #222

                            That was my first thought as well. These things really need a way to store a larger context without ballooning past the VRAM limit.

[email protected] wrote:

                              That was my first thought as well. These things really need a way to store a larger context without ballooning past the VRAM limit.

                              [email protected] · #223

                              The thing being, it's kind of an inflexible black-box technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common-sense stuff people chased for decades inside a computer, but it's ironically still beyond our reach to fully use.

                              From here, I expect either that steady progress will be made in finding more clever and constrained ways of using the raw neural-net output, or that we're headed back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.

                              • C [email protected]

                                The thing being, it's kind of an inflexible blackbox technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common sense stuff that people were chasing for decades inside a computer, but it's ironically still beyond our reach to fully use.

                                From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.

                                [email protected] · #224

                                I feel like, the way investments are currently made, coming up with something new is almost impossible. Most of the hardware is designed with LLMs in mind.

                                • C [email protected]

                                  I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.

                                  [email protected] · #225

                                  That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

                                  • J [email protected]

                                    That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

                                    [email protected] · #226

                                    You can devise a task it couldn't have seen in the training data, I mean. Building a comprehensive argument out of such tasks requires a lot more work and time.

                                    You don’t even have access to the “thinking” side of the LLM.

                                    Obviously, that goes for the natural intelligences too, so it's not really a fair thing to require.
