agnos.is Forums

The vibecoders are becoming sentient

Programmer Humor · 177 Posts · 123 Posters

  • C [email protected]

    No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.

[email protected] wrote (#126):

> a coding LLM can code as well as a fresh CS grad.

For a couple of hundred lines of code, they might even be above average. When you split that into a couple of files or start branching out, they usually start to struggle.

> after you give them a piece of advice once or twice, they stop making that same mistake.

That's a damn good observation. Learning only happens with re-training, and that's wayyy cheaper when done in meat.

    • O [email protected]

      It makes me so mad that there are CS grads who can't find work at the same time as companies are exploiting the H1B process saying "there aren't enough applicants". When are these companies going to be held accountable?

[email protected] wrote (#127):

After they fill up on H1B workers and find out that only 1 in 10 is a good investment.

H1B development work has been a thing for decades, but there's a reason why there are still high-paying development jobs in the US.

      • P [email protected]

        Has he tried being a senior developer? He should really try being a senior developer.

[email protected] wrote (#128):

He needs at least a decade of industry experience. That's what helps me find jobs.

        • O [email protected]

          It makes me so mad that there are CS grads who can't find work at the same time as companies are exploiting the H1B process saying "there aren't enough applicants". When are these companies going to be held accountable?

[email protected] wrote (#129):

This is in no way new. 20 years ago, I used to refer to some job postings as "H1Bait", because they'd list requirements that were physically impossible (like 5 years of experience with a piece of software less than 2 years old), specifically so the company could claim it couldn't find anyone qualified (since anyone claiming to be qualified was definitely lying) and justify an H1B hire, for which it would suddenly be far less thorough about checking qualifications.

          • A [email protected]

            I have never used an AI to code and don't care about being able to do it to the point that I have disabled the buttons that Microsoft crammed into VS Code.

            That said, I do think a better use of AI might be to prepare PRs in logical and reasonable sizes for submission that have coherent contextualization and scope. That way when some dingbat vibe codes their way into a circle jerk that simultaneously crashes from dual memory access and doxxes the entire user base, finding issues is easier to spread out and easier to educate them on why vibe coding is boneheaded.

            I developed for the VFX industry and I see the whole vibe coding thing as akin to storyboards or previs. Those are fast and (often) sloppy representations of the final production which can be used to quickly communicate a concept without massive investment. I see the similarities in this, a vibe code job is sloppy, sometimes incomprehensible, but the finished product could give someone who knew what the fuck they are doing a springboard to write it correctly. So do what the film industry does: keep your previs guys in the basement, feed them occasionally, and tell them to go home when the real work starts. (No shade to previs/SB artists, it is a real craft and vital for the film industry as a whole. I am being flippant about you for commedic effect. Love you guys.)

[email protected] wrote (#130):

I think storyboards are a great example of how it could be used properly.

Storyboards are a great way for someone to communicate "this is how I want it to look" in a rough way. But a storyboard will never show up in the final movie (except maybe as fun clips during the credits or something). It's something that helps you on your way, but along the way 100% of it is replaced.

Similarly, the way I think of generative AI is that it's basically a really good props department.

In the past, if a props / graphics / FX department had to generate some text on a computer screen that looked like someone was Hacking the Planet, they'd need to come up with something that looked completely realistic, but it would either be hand-crafted or some open-source file spewed out on the screen. What generative AI does is digest vast amounts of data to come up with something that looks realistic for the prompt it was given. For a hacking scene, an LLM can probably generate something much better than what the humans would make, given the time and effort required: a hacking scene that a computer security professional would find realistic is normally way beyond the required scope, but an LLM can probably produce one that is plausible even to a professional, because of what it has been trained on. Still, it's a prop. If there are any IP addresses or email addresses in the LLM-generated output, they may or may not work. And for a movie prop, it might actually be worse if they do work.

When you're asking an AI something like "What does a selection sort algorithm look like in Rust?", what you're really doing is asking "What does a realistic answer to that question look like?" You're basically asking for a prop.
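To make that concrete, here's roughly the kind of "prop" that prompt gets you: a plain selection sort in Rust. (This is a hand-written sketch of a typical answer, not actual LLM output.)

```rust
// Selection sort: repeatedly find the smallest element of the
// unsorted tail and swap it into place. O(n^2) comparisons,
// O(1) extra space, sorts in place.
fn selection_sort<T: Ord>(items: &mut [T]) {
    let len = items.len();
    for i in 0..len {
        // Index of the minimum of items[i..].
        let mut min = i;
        for j in (i + 1)..len {
            if items[j] < items[min] {
                min = j;
            }
        }
        items.swap(i, min);
    }
}

fn main() {
    let mut v = [5, 2, 9, 1, 7];
    selection_sort(&mut v);
    assert_eq!(v, [1, 2, 5, 7, 9]);
    println!("{:?}", v);
}
```

It happens to be a working answer because selection sort appears thousands of times in the training data, not because anything understood sorting.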

Now, some props can be extremely realistic looking. Think of the cockpit of an airplane in a serious aviation drama. The props people will probably either build a very realistic cockpit, or maybe even buy one from a junkyard and fix it up. The prop will be realistic enough that even a pilot will look at it and say that it's correctly laid out and accurate. Similarly, if you ask an LLM to produce code for you, sometimes it will give you something that is realistic enough that it actually works.

Having said that, there's fundamentally a difference between "What is the answer to this question?" and "What would a realistic answer to this question look like?" And that's the fundamental flaw of LLMs. Answering a question requires understanding the question. Simulating an answer just requires pattern matching.

            • S [email protected]

              This is in no way new. 20 years ago I used to refer to some job postings as H1Bait because they'd have requirements that were physically impossible (like having 5 years experience with a piece of software <2 years old) specifically so they could claim they couldn't find anyone qualified (because anyone claiming to be qualified was definitely lying) to justify an H1B for which they would be suddenly way less thorough about checking qualifications.

[email protected] wrote (#131):

Yeah, companies have always been abusing H1B, but it seems like it's only recently become so hard for CS grads to find jobs. I didn't have much trouble in 2010, and it was easy for me to hop jobs over the last 10 years.

Now, not so much.

              • J [email protected]

                Yeah, this is my nightmare scenario. Code reviews are always the worst part of a programming gig, and they must get exponentially worse when the junior devs can crank out 100s of lines of code per commit with an LLM.

[email protected] wrote (#132):

Also, LLMs are essentially designed to produce code that will pass a code review: output designed to look as realistic as possible. So not only do you have to look through the code for flaws, but any error is basically camouflaged.

With a junior dev, sometimes their lack of experience is visible in the code. You can tell what to look at more closely based on where it looks like they're out of their comfort zone. An LLM, on the other hand, is always 100% in its comfort zone, but has no clue what it's actually doing.

                • F [email protected]

                  Nah, it's the microplastics.

[email protected] wrote (#133):

Why not both™?

                  • E [email protected]

                    I don't want to dismiss your point overall, but I see that example so often and it irks me so much.

                    Unit tests are your specification. So, 1) ideally you should write the specification before you implement the functionality. But also, 2) this is the one part where you really should be putting in your critical thinking to work out what the code needs to be doing.

                    An AI chatbot or autocomplete can aid you in putting down some of the boilerplate to have the specification automatically checked against the implementation. Or you could try to formulate the specification in plaintext and have an AI translate it into code. But an AI without knowledge of the context nor critical thinking cannot write the specification for you.

[email protected] wrote (#134):

Tests are probably both the best and the worst thing to use LLMs for.

They're the best because of all the boilerplate. Unit tests tend to have so much of it, setting things up and tearing it down, and you want that to be as consistent as possible so that someone looking at it immediately understands what they're seeing.
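For what it's worth, here's a minimal sketch of the kind of boilerplate I mean, in Rust. The `Counter` type and its tests are made up for illustration; run it with `cargo test`:

```rust
// A toy type standing in for whatever is actually under test.
pub struct Counter {
    count: i64,
}

impl Counter {
    pub fn new() -> Self {
        Counter { count: 0 }
    }
    pub fn increment(&mut self) {
        self.count += 1;
    }
    pub fn value(&self) -> i64 {
        self.count
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // The consistent setup an LLM is genuinely good at churning out.
    fn fresh_counter() -> Counter {
        Counter::new()
    }

    #[test]
    fn starts_at_zero() {
        let c = fresh_counter();
        assert_eq!(c.value(), 0);
    }

    #[test]
    fn increment_adds_one() {
        let mut c = fresh_counter();
        c.increment();
        assert_eq!(c.value(), 1);
    }

    // Deciding *which* cases to attack (overflow? threading? weird
    // inputs?) is the part that needs a human who understands the code.
}
```

The scaffolding is uniform and boring, which is exactly what you want from it. The actual thinking lives in the list of cases, not the plumbing.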

OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don't understand anything, so I'd never trust one to come up with a good set of things to test.

In reply to [email protected]:

> It is not useless. You should absolutely continue to vibes code. Don't let a professional get involved at the ground floor. Don't in-house a professional staff.
>
> Please continue paying me $200/hr for months on end to debug your Baby's First Web App tier coding project long after anyone else could salvage it.
>
> And don't forget to tell your investors how smart you are by Vibes Coding! That's the most important part. Secure! That! Series! B! Go public! Get yourself a billion-dollar valuation on these projects!
>
> Keep me in the good wine and the nice car! I love vibes coding.

[email protected] wrote (#135):

Not me. I'd rather work on a clean code base without any slop, even if it pays a little less. QoL > TC.

                      • R [email protected]

                        Can someone tell me what vibe coding is?

[email protected] wrote (#136):

From what I understand, it's using an LLM for coding, but taken to an extreme. A regular programmer might use an LLM to help them with something, but they'll read through the code the LLM produces, make sure they understand it, tweak it wherever necessary, etc. A vibe coder might not even be a programmer; they just get the LLM to generate some code and run it to see if it does what they want. If it doesn't, they talk to the LLM some more and generate some more code. At no point do they actually read through the code and try to understand it.

                        • A [email protected]

                          I'm entirely too trusting and would like to know what about the phrasing tips you off that it's fictional. Back on Reddit I remember so many claims about posts being fake and I was never able to tease out what distinguished the "omg fake! r/thathappened" posts from the ones that weren't accused of that, and I feel this is a skill I should be able to have on some level. Although taking an amusing post that wasn't real as real doesn't always have bad consequences.

                          But I mostly asked because I'm curious about the weird extra width on letters.

[email protected] wrote (#137):

Interesting. Curious, as a point of comparison, how The Onion reads to you.

(Only a mediocre point of comparison, I fear, but still.)

In reply to [email protected]:

> This post did not contain any content.
[email protected] wrote (#138):

As a software developer, I've found some free LLMs provide productivity boosts. It's a fairly hair-pulling experience, though: don't try too hard to get a bad LLM to correct itself, and learning to switch away from bad LLMs quickly is a key skill in using them. A good model is one where you can fix its broken code and then ask it to understand why what you provided fixes it. They need a long context window to not repeat their mistakes. Qwen 3 is very good at this. Open source also means a future of customizing models to a domain (e.g. language-specific optimizations), and privacy trust / unlimited use with enough local RAM, with some confidence that the AI is working for you rather than collecting data for others. Claude Sonnet 4 is stronger, but free access is limited.

The permanent flaw of the high-market-cap US AI industry is that it will always be a vector for NSA/fascist empire supremacy, and the Skynet goal, in addition to potentially stealing your input/output streams. The future for users who need to opt out of these threats is local inference and open source that can be customized to the domains important to users/organizations. Open models are already at close parity, IMO from my investigations, and customization, relatively low-hanging fruit, is a certain path to exceeding parity for most applications.

No LLM can be trusted to let you do something you have no expertise in. That changing remains an optimistic future, and will for longer than you hope.

                            • A [email protected]

                              I'm entirely too trusting and would like to know what about the phrasing tips you off that it's fictional. Back on Reddit I remember so many claims about posts being fake and I was never able to tease out what distinguished the "omg fake! r/thathappened" posts from the ones that weren't accused of that, and I feel this is a skill I should be able to have on some level. Although taking an amusing post that wasn't real as real doesn't always have bad consequences.

                              But I mostly asked because I'm curious about the weird extra width on letters.

                              B This user is from outside of this forum
                              B This user is from outside of this forum
                              [email protected]
                              wrote last edited by [email protected]
                              #139
                              1 Reply Last reply
                              0
                              • H [email protected]

                                As a software developer, I've found some free LLMs to provide productivity boosts. It is a fairly hairpulling experience to not try too hard to get a bad LLM to correct itself, and learning to switch quickly from bad LLMs is a key skill in using them. A good model is still one that you can fix their broken code, and ask them to understand why what you provided them fixes it. They need a long context window to not repeat their mistakes. Qwen 3 is very good at this. Open source also means a future of customizing to domain, ie. language specific, optimizations, and privacy trust/unlimited use with enough local RAM, with some confidence that AI is working for you rather than data collecting for others. Claude Sonnet 4 is stronger, but limited free access.

                                The permanent side of high market cap US AI industry is that it will always be a vector for NSA/fascism empire supremacy, and Skynet goal, in addition to potentially stealing your input/output streams. The future for users who need to opt out of these threats, is local inference, and open source that can be customized to domains important to users/organizations. Open models are already at close parity, IMO from my investigations, and, relatively low hanging fruit, customization a certain path to exceeding parity for most applications.

                                No LLM can be trusted to allow you do to something you have no expertise in. This state will remain an optimistic future for longer than you hope.

[email protected] wrote (#140):

I think the key to good LLM usage is a light touch. Let the LLM know what you want, and maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it's not getting your idea, you're going to wind up with a bad product. Just abandon it and do the thing yourself, or get someone who knows what you want.

They get confused easily, and despite what is being pitched, they don't really learn very well. So if they get something wrong the first time, they aren't going to figure it out after another hour or two.

                                • L [email protected]

                                  Why not both ™?

[email protected] wrote (#141):

With a pinch of PFAS for good measure?

                                  • F [email protected]

                                    Nah, it's the microplastics.

[email protected] wrote (#142):

Microplastics are stored in the balls.

                                    • C [email protected]

                                      No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.

[email protected] wrote (#143):

What's this based on? Have you met a fresh CS graduate and compared them to an LLM? Does it not vary person to person? Or, fuck it, LLM to LLM? Calling them not qualified seems harsh when it's based on sod all.

                                      • B [email protected]

                                        Interesting. Curious for a point of comparison how The Onion reads to you.

                                        (Only a mediocre point of comparison I fear, but)

[email protected] wrote (#144):

That's a bit difficult, because I already go into anything from The Onion knowing it's intended to be humorous/satirical.

What I lack in ability to recognize satire or outright deception in posts written online, I make up for by reading comment threads: seeing people accuse things of being fake, seeing people defend them as true, seeing people point out that the entire intention of a website is satire, seeing people who had a joke go over their heads get it explained... relying on the collective hivemind to help me out where I am deficient. It's not a perfect solution at all, especially since people can judge wrong: I bet some "omg so fake" threads were actually real, and some astroturf-type things written to influence others, with no real experience behind them, got through as real.

                                        • D [email protected]

                                          I think the key to good LLM usage is a light touch. Let the LLM know what you want, maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it's not getting your idea, you're going to wind up with a bad product. Just abandon it and try to do the thing yourself or get someone who knows what you want.

                                          They get confused easily, and despite what is being pitched, they don't really learn very well. So if they get something wrong the first time they aren't going to figure it out after another hour or two.

[email protected] wrote (#145):

In my experience, they're better at poking holes in code than writing it, whether that's greenfield or brownfield.

I've tried to get one to make sections of changes for me, and it feels very productive, but when I time myself I find I probably spend more time correcting the LLM's work than if I'd just written it myself.

But if you ask it to judge a refactor, you might actually get one or two good points. You just have to be really careful to double-check its assertions if you're unfamiliar with anything, because it will lead you to some real boners if you follow it blindly.
