New Junior Developers Can’t Actually Code.
-
When I had to get up to speed on a new language, it was very helpful. It's also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for putting together Ansible tasks. That said, I've been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it looking up documentation and writing boilerplate code). It's also nice for writing C# unit tests.
However, the times I've been stuck on my main languages, it's been utterly useless.
ChatGPT is extremely useful if you already know what you're doing. It's garbage if you're relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It's also probably useful for looking like you kinda know what you're doing as a junior in a new project. I've seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
-
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
I used it a few days ago to translate a math formula into code.
Here is the formula:
https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782
It's not the most complicated thing. I could have done it, but it would have taken me some time. I just input the formula directly along with the desired language, and the result was well done and worked flawlessly.
It saved me some time.
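I can't paste the rendered formula here, but to give a sense of the kind of task (this is a stand-in, not the formula behind that link), think "turn the sample standard deviation into Python":

```python
import math

# Stand-in example only: sample standard deviation,
# s = sqrt( sum((x_i - mean)^2) / (n - 1) )
def sample_std(values):
    n = len(values)
    if n < 2:
        raise ValueError("need at least two values")
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

print(sample_std([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # ~2.138
```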
-
I've said it before, but this is a 20-year-old problem.
After Y2K, all those shops that over-porked on devs began shedding the most pricey ones; worse in 'at will' states.
Who were those devs? Mentors. They shipped less code, closed fewer tickets, cost more, but their value wasn't in tickets and code: it was investing in the next generation. And they had to go because #numbersGoUp
And they left. And the first gen of devs with no mentorship joined and started their careers. No idea about edge cases, missing middles or memory management. No lint, no warnings, build and ship and fix the bugs as they come.
And then another generation. And these were the true 'lost boys' of dev. C is dumb, C++ is dumb, perl is dumb, it's all old, supply chain exploits don't exist, I made it go so I'm done, fuck support, look at my numbers. It's all low-attention span, baling wire and trophies because #numbersGoUp.
And let's be fair: they're good at this game, the new way of working where it's a fast finish, a head-pat, and someone else's problem. That's what the companies want, and that's what they built.
They say now that relying on AI means one never really exercises critical thought and problem-solving, and I see it when I'm forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won't learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember. But we're seeing people do that with actual work, with Go and Rust code, and yeah, no concept of why we want to check for completeness, let alone a concept of how.
One day these new devs will proudly install a patch in the RTOS flashed into your heart monitor and that annoying beep will go away. Sleep tight.
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you're paying mentors huge salaries to train juniors who are velocity drags into velocity boosters, you do it knowing they are going to leave and take all that investment with them to a higher paycheck.
I'm not saying this is right, but that's the reality from the paycheck side of things, and I think there needs to be radical change for both sides. Like a trade union or something: the union takes responsibility for certifying skills and suitability, companies can be more confident of hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay is more skill/experience based so developers don't have to hop jobs to get paid what they're worth.
-
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
Agreed. I wanted to test a new config in my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.
It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I'm on 7). It also makes mistakes, and when I point out the mistake it says, "Oh, you are totally right, I made a mistake," then goes on to explain what mistake it made and outputs new code. However, more often than not the new code contained the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.
In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that and it almost worked right away. A couple of minutes of tweaking and testing and I got it working.
I'm afraid for a future where forums and such don't exist and sources like Reddit get fucked and nuked. In an AI driven world the incentive for creating new original content is way lower. So when AI doesn't know the answer, you are just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity and not give the gains people are hoping for at the moment.
-
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
I'm forced to use Copilot at work and, as far as code completion goes, it gets it right 10-15% of the time; the rest of the time it just suggests random, credible-looking noise or hallucinates variables and shit.
-
We have the same problem with literacy here in Sweden. It’s unnerving to think that these kids will need to become doctors, lawyers and police officers in the future.
Sweden of all places? What happened in the last decade that Sweden is slowly losing its reputation as the country to follow in social matters?
-
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you're paying mentors huge salaries to train juniors who are velocity drags into velocity boosters, you do it knowing they are going to leave and take all that investment with them to a higher paycheck.
I'm not saying this is right, but that's the reality from the paycheck side of things, and I think there needs to be radical change for both sides. Like a trade union or something: the union takes responsibility for certifying skills and suitability, companies can be more confident of hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay is more skill/experience based so developers don't have to hop jobs to get paid what they're worth.
Yeah, those job hoppers are the worst. You can always tell right away what kind of person they are. I've had to work with a "senior" dev who had 15 years of experience and, to be honest, he sucked at his job. He couldn't do simple tasks, didn't think before he started writing code, and often got stuck asking other people for help. But he got paid big bucks, because all he did his entire career was work somewhere for 2-3 years and then job hop and trade up. By the time the company figured out the dude was useless, he had moved on to the next company.
Such a shitty attitude, which is a shame because he was a good dude otherwise; I got along with him on a personal level. And honestly, good on him for making the most he can, fuck the company. But I personally couldn't do that; I take pride in my work.
-
This post did not contain any content.
Not in any way a new phenomenon; there's a reason FizzBuzz was invented. There's been a steady stream of CS graduates who can't code their way out of a wet paper bag ever since the profession hit the mainstream.
Actually fucking interview your candidates, especially if you're sourcing candidates from a country with for-profit education and/or rote-learning cultures, both of which suck when it comes to failing people who didn't learn anything. No BS coding tests; go for "explain this code to me" kind of stuff. Worst case, they can understand code but suck at producing it, and that's still prime QA material right there.
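For anyone who hasn't seen it, FizzBuzz is roughly this much code (a quick Python sketch), and it still filters people out:

```python
# FizzBuzz: print 1..100, "Fizz" for multiples of 3, "Buzz" for 5, "FizzBuzz" for both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```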
-
Sweden of all places? What happened in the last decade that Sweden is slowly losing its reputation as the country to follow in social matters?
This is only a guess, but it could be related to increased use of technology. Many things we interact with are simplified, and if you come across a word you don't know, your phone can give you simple synonyms, or if you can't spell, autocorrect will catch it.
The same problem people are talking about with LLMs with a different lens.
-
This post did not contain any content.
Very "back in my day" energy.
I do not support AI but programming is about solving problems and not writing code.
If we're going to fixate on the tools, then developers should still be using punch cards as well. Is that a bad thing?
-
Very "back in my day" energy.
I do not support AI but programming is about solving problems and not writing code.
If we're going to fixate on the tools, then developers should still be using punch cards as well. Is that a bad thing?
You're right that the goal is problem solving; you're wrong that inability to code isn't a problem.
AI can make a for loop and do common tasks but the moment you have something halfway novel to do, it has a habit of shitting itself and pretending that the feces is good code. And if you can't read code, you can't tell the shit from the stuff you want.
It may be able to do it in the future, but it can't yet.
Source: data engineer who has fought his AI a time or two.
-
When I had to get up to speed on a new language, it was very helpful. It's also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for putting together Ansible tasks. That said, I've been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it looking up documentation and writing boilerplate code). It's also nice for writing C# unit tests.
However, the times I've been stuck on my main languages, it's been utterly useless.
I love asking AI to generate a framework / structure for a project that I then barely use and then realize I shoulda just done it myself
-
This post did not contain any content.
Stack Overflow and Google were once the "AI" of the previous generation. "These kids can't code, they just copy what others have done"
-
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
I've been using (mostly) Claude to help me write an application in a language I'm not experienced with (Rust). Mostly with helping me see what I did wrong with syntax or with the borrow checker. Coming from Java, Python, and C/C++, it's very easy to manage memory in exactly the ways Rust won't let you.
That being said, any new code it generates for me I end up having to fix 9 times out of 10. So in a weird way I've been learning more about Rust from having to correct code that's been generated by an LLM.
I still think LLMs for the next while will be mostly useful as a hyper-spell-checker for code, and not for generating new code. I often find that I would have saved time if I had just tackled the problem myself and not tried to rely on an LLM. Although sometimes an LLM can give me an idea on how to solve a problem.
-
Not in any way a new phenomenon; there's a reason FizzBuzz was invented. There's been a steady stream of CS graduates who can't code their way out of a wet paper bag ever since the profession hit the mainstream.
Actually fucking interview your candidates, especially if you're sourcing candidates from a country with for-profit education and/or rote-learning cultures, both of which suck when it comes to failing people who didn't learn anything. No BS coding tests; go for "explain this code to me" kind of stuff. Worst case, they can understand code but suck at producing it, and that's still prime QA material right there.
We do two "code challenges":
- Very simple, many are done in 5 min; this just weeds out the incompetent applicants, and 90% of the code is written (i.e. simulate working in an existing codebase)
- Ambiguous requirements, the point is to ask questions, and we actually have different branches depending on assumptions they made (to challenge their assumptions); i.e. simulate building a solution with product team
The first is in the first round, the second is in the technical interview. Neither are difficult, and we provide any equations they'll need.
It's much more important that they can reason about requirements than code something quick, because life won't give you firm requirements, and we don't want a ton of back and forth with product team if we can avoid it, so we need to catch most of that at the start.
In short, we're looking for actual software engineers, not code monkeys.
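To give a flavour of the first challenge (a made-up stand-in in Python, not our actual exercise): most of the scaffolding is already written, and the candidate only fills in the one missing function.

```python
# Hypothetical "90% already written" exercise: everything exists except apply_discount().
ORDERS = [
    {"id": 1, "total": 120.0, "loyalty_years": 3},
    {"id": 2, "total": 40.0, "loyalty_years": 0},
]

def apply_discount(order):
    """Candidate fills this in: 5% off per loyalty year, capped at 20%."""
    rate = min(0.05 * order["loyalty_years"], 0.20)
    return round(order["total"] * (1 - rate), 2)

def main():
    for order in ORDERS:
        print(f"order {order['id']}: {apply_discount(order):.2f}")

if __name__ == "__main__":
    main()
```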
-
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
Same. It can generate credible-looking code, but I don't find it very useful. Here's what I've tried:
- describe a function: takes longer to read the explanation than to grok the code
- generate tests: hallucinates arguments, doesn't do proper boundary checks, etc. (example below)
- look up docs: mostly useful for finding search terms for the real docs
The second was kind of useful since it provided the structure, but I still replaced 90% of it.
I'm still messing with it, but beyond solving "blank page syndrome," it's not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.
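To make the test-generation point concrete, here's a made-up but representative Python example: the model invents an argument the function never had and skips the boundaries that actually matter.

```python
def parse_price(text: str) -> float:
    """Parse a price string like '$1,299.99' into a float."""
    return float(text.replace("$", "").replace(",", ""))

# Illustrative only: the kind of test the LLM tends to hand back
# (note the nonexistent `currency` argument, and no boundary cases):
#   def test_parse_price():
#       assert parse_price("$10.00", currency="USD") == 10.0

# What I actually end up writing:
def test_parse_price():
    assert parse_price("$1,299.99") == 1299.99
    assert parse_price("0") == 0.0
```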
-
Junior devs could never code, yes, including us
Agreed. I was hired for my first job due to an impressive demo, and turning that demo into the real thing became my job. I got there, but I produced a ton of tech debt in the process.
-
How is it more efficient than reading a static page? The kids can't read.
They weren't taught phonics, they were taught to guess the word with context clues. It's called "whole language" or "balanced reading."
Really? My kids are hitting the rules hard. In 1st grade, they're learning pronunciation rules I never learned (that's phonics, right?). My 2nd grader is reading the 4th Harry Potter book, and my 5th grader finished the whole series in 3rd grade and is reading at a 7th or 8th grade level.
I did teach them to read before kindergarten (just used a book for 2-3 months of 10-minute lessons), but that's it; everything else is school and personal interest. They can both type reasonably well because they use the Minecraft console and chat. They're great at puzzles, and my 5th grader beat me at chess (I tried a wonky opening, and he punished me), which they learned at school (extracurricular, but run by a teacher).
We love our charter school, though I don't think it's that different from the public school.
-
and I see it when I'm forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won't learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember.
Feels like this is the attitude towards programming in general nowadays.
To be fair, YAML sucks. It's a config language that someone thought should cover everything, and it ends up excelling at nothing.
Just use TOML, JSON, or old-school INI. YAML will just give you an aneurysm. Use the best tool for the job, which is often not the prettiest one.
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
Kids these days with their fancy stuff, you don't need all that to write good software. YAML is the quintessential "jack of all trades, master of none" nonsense. It's a config file, just make it easy to parse and document how to edit it. That's it.
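Case in point (file name and keys here are made up; tomllib has been in the Python standard library since 3.11):

```python
import tomllib  # stdlib since Python 3.11

# Hypothetical config.toml:
#   [server]
#   host = "0.0.0.0"
#   port = 8080

with open("config.toml", "rb") as f:  # tomllib wants binary mode
    cfg = tomllib.load(f)

print(cfg["server"]["host"], cfg["server"]["port"])
```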
-
We do two "code challenges":
- Very simple, many are done in 5 min; this just weeds out the incompetent applicants, and 90% of the code is written (i.e. simulate working in an existing codebase)
- Ambiguous requirements, the point is to ask questions, and we actually have different branches depending on assumptions they made (to challenge their assumptions); i.e. simulate building a solution with product team
The first is in the first round, the second is in the technical interview. Neither are difficult, and we provide any equations they'll need.
It's much more important that they can reason about requirements than code something quick, because life won't give you firm requirements, and we don't want a ton of back and forth with product team if we can avoid it, so we need to catch most of that at the start.
In short, we're looking for actual software engineers, not code monkeys.
Those are good approaches. I would note that the "90% is written" one is mostly about code comprehension, not writing (as in: actually architecting something), and the requirements thing is something you should, IMO, learn as a junior; it's not a prerequisite. It needs a lot of experience, and often domain knowledge new candidates have no chance of having. But then, throwing such stuff at them and judging them by their approach, not the end result, should be fair.
The main question I ask myself, in general, is "can this person look at code from different angles?" Somewhat like rotating a cube in your mind's eye, if you get what I mean. And it might even be that they're no good at it, but they demonstrate the ability when talking about coffee making. People who don't get lost when you're talking about how cash registers with a common queue have better overall latency than cash registers with individual queues. Just as a carpenter would ask someone "do you like working with your hands?", the question is "do you like to rotate implication structures in your mind?"
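Not the point of the question, but if anyone wants to convince themselves of the cash-register claim, a throwaway simulation with completely made-up arrival and service numbers shows it:

```python
import random

def avg_wait(registers=3, customers=50_000, shared=True, seed=42):
    """Toy model with hypothetical numbers: shared=True is one common queue feeding all
    registers, shared=False means each customer picks a register at random and waits there."""
    rng = random.Random(seed)
    now = 0.0
    free_at = [0.0] * registers          # when each register next becomes free
    total_wait = 0.0
    for _ in range(customers):
        now += rng.expovariate(1.0)               # arrivals: mean gap 1.0
        service = rng.expovariate(1.0 / 2.7)      # service: mean 2.7 (~90% utilisation)
        r = free_at.index(min(free_at)) if shared else rng.randrange(registers)
        start = max(now, free_at[r])
        total_wait += start - now
        free_at[r] = start + service
    return total_wait / customers

print("common queue     :", round(avg_wait(shared=True), 1))
print("individual queues:", round(avg_wait(shared=False), 1))
```

With numbers like these, the common queue's average wait comes out several times lower than the individual queues, which is the whole point of the analogy.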