New Junior Developers Can’t Actually Code.
-
Feels like it would be quicker and easier just to write the code myself at that point...
-
You and I read a very different comment, apparently. There was nothing there saying new is bad. Maybe read it again.
-
I remember talking to someone about where LLMs are and aren't useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn't been trained on.
The other person insisted that that is exactly what LLMs are great at. They wouldn't explain how exactly the LLM was supposed to know how my company's internal software, which is a trade secret, is structured.
But hey, I figured I'd give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions... on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.
Oddly enough, that's not what my company's ASDIS is. It's almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
-
I'm a little defeatist about it. I saw with my own 3 eyes how a junior asked ChatGPT how to insert something into an std::unordered_map. I tell them about cppreference. The little shit tells me "Sorry unc, ChatGPT is objectively more efficient". I almost blew a fucking gasket, mainly cuz I'm not that god damn old. I don't care how much you try to convince me that LLMs are efficient, there is no shot they are more efficient than opening a static page with all the info you would ever need. Not even considering energy efficiency. Utility aside, the damage we have dealt to developing minds is irreversible. We have convinced them that thought is optional. This is gonna bite us in the ass. Hard.
What are you guys working on where chatgpt can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
-
I have seen this too much. My current gripe isn't fresh devs, as long as they are teachable and care.
My main pain over the last several years has been the bulk of 'give-no-shit' perms/contractors who don't want to think or try when they can avoid it.
They run a web of lies until it is no longer sustainable (or, for contractors, until the project is done), and then, once again, it's someone else's problem.
There are plenty of 10/20-year-plus devs who don't know what they are doing and don't care whose problem it will be, as long as it isn't theirs.
I'm sick of writing coding 101 standards for 1k+ a day 'experts'. More sick of PR feedback where it's a battle to get things done in a maintainable manner from said 'experts'.
-
How is it more efficient than reading a static page? The kids can't read.
They weren't taught phonics; they were taught to guess the word from context clues. It's called "whole language" or "balanced reading".
-
Junior devs could never code, yes, including us
-
Yeah, and the way it will confidently give you a wrong answer instead of either asking for more information or saying it just doesn't know is equally annoying.
-
When I had to get up to speed on a new language, it was very helpful. It's also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I've been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it being looking up documentation and writing boilerplate code). It's also nice for writing C# unit tests.
However, the times I've been stuck on my main languages, it's been utterly useless.
-
Literacy rates are on a severe decline in the US, AI is only going to make that worse.
Over half of Americans between 16 and 74 read below a 6th-grade level (that's below the expected reading level of an 11-year-old!)
-
And I see it when I'm forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won't learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember.
Feels like this is the attitude towards programming in general nowadays.
-
I work in a small company that hardly hires at all... Stories like this scare me because I have no way to personally quantify how common that kind of attitude might be.
-
We have the same problem with literacy here in Sweden. It’s unnerving to think that these kids will need to become doctors, lawyers and police officers in the future.
-
This is exactly right. AI can only interpolate between data points. I used to write code for research papers, and ChatGPT couldn't understand a thing I asked of it.
-
It's going to get worse. I suspect this'll end with LLMs taking the part of production programs: juniors just feeding one scenarios to follow, hooking the thing up to a database and a web page, and letting it run. It'll gobble power like there's no tomorrow and be a nightmare to maintain, but it goes live in a quarter of the time, so every manager will go with that.
-
Agreed. A few years back, the devs looking for quick fixes would go over to StackOverflow and just copy answers without reading the explanations. This caused the same type of problems that OP is talking about. That said, the ease of AI might be making things even worse.
-
Look, ultimately the problem is the same as it has always been: juniors doing junior shit. There's just more of it going on. If you're hiring one, you put a senior on them ready to extinguish fires. A good review process is a must.
Now that I think about it, there was this one time the same young'un I was talking about tried to commit this insane subroutine that was basically resizing a vector in the most roundabout way imaginable. Probably would have worked, but you can also just use the resize method, y'know? In retrospect, that was probably some Copilot bullshit, but because we have a review process in place, it was never an issue.
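Something in this spirit, if you want a picture (a made-up reconstruction of the idea, not the actual diff):

    #include <cstddef>
    #include <vector>

    // The roundabout version: rebuild the vector by hand, element by element.
    std::vector<int> grow_the_hard_way(const std::vector<int>& v, std::size_t n) {
        std::vector<int> result;
        for (std::size_t i = 0; i < n; ++i)
            result.push_back(i < v.size() ? v[i] : 0);
        return result;
    }

    // The one-liner it should have been.
    void grow(std::vector<int>& v, std::size_t n) {
        v.resize(n);  // pads with value-initialized elements (0 for int), or truncates
    }

-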
Because giving answers is not an LLM's job. An LLM's job is to generate text that looks like an answer. We then try to coax that framework into generating correct answers as often as possible, with mixed results.
-
ChatGPT is extremely useful if you already know what you're doing. It's garbage if you're relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It's also probably useful for looking like you kinda know what you're doing as a junior in a new project. I've seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.