New Junior Developers Can’t Actually Code.
-
Literacy rates are in severe decline in the US, and AI is only going to make that worse.
Over half of Americans between 16 and 74 read below a 6th-grade level (that's below the expected reading level of an 11-year-old!).
-
and I see it when I'm forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won't learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember.
Feels like this is the attitude towards programming in general nowadays.
-
I work at a small company that hardly hires at all... Stories like this scare me because I have no way to personally quantify how common that kind of attitude might be.
-
We have the same problem with literacy here in Sweden. It’s unnerving to think that these kids will need to become doctors, lawyers and police officers in the future.
-
This is exactly right. AI can only interpolate between data points. I used to write code for research papers and ChatGPT couldn't understand a thing I asked of it.
-
It's going to get worse. I suspect this'll end with LLMs taking the place of production programs: juniors just feeding one scenarios to follow, hooking the thing up to a database and a web page, and letting it run. It'll gobble power like there's no tomorrow and be a nightmare to maintain, but it goes live in a quarter of the time, so every manager goes with that.
-
Agreed. A few years back, the devs looking for quick fixes would go over to StackOverflow and just copy answers without reading the explanations. That caused the same type of problems OP is talking about. That said, the ease of AI might be making things even worse.
-
Look, ultimately the problem is the same as it has always been: juniors doing junior shit. There's just more of it going on. If you're hiring one, you put a senior on them ready to extinguish fires. A good review process is a must.
Now that I think about it, there was this one time the same young'un I was talking about tried to commit this insane subroutine that was basically resizing a vector in the most roundabout way imaginable. It probably would have worked, but you can also just use the resize method, y'know? In retrospect, that was probably some Copilot bullshit, but because we have a review process in place, it was never an issue.
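Something along these lines (a loose reconstruction, assuming C++ and std::vector, not the actual commit):

```cpp
#include <vector>

// The roundabout version: build a brand-new vector just to change the length.
std::vector<int> grow_the_hard_way(const std::vector<int>& v, std::size_t new_size) {
    std::vector<int> result;
    result.reserve(new_size);
    for (std::size_t i = 0; i < new_size; ++i) {
        result.push_back(i < v.size() ? v[i] : 0);  // copy what exists, pad with zeros
    }
    return result;
}

// What the standard library already gives you.
void grow_the_easy_way(std::vector<int>& v, std::size_t new_size) {
    v.resize(new_size);  // new elements are value-initialized to 0
}
```
-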
Because giving answers is not an LLM's job. An LLM's job is to generate text that looks like an answer. We then try to coax that framework into generating correct answers as often as possible, with mixed results.
-
ChatGPT is extremely useful if you already know what you're doing. It's garbage if you're relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It's also probably useful for looking like you kinda know what you're doing as a junior in a new project. I've seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
-
I used it a few days ago to translate a math formula into code.
Here is the formula:
https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782
It's not the most complicated thing. I could have done it myself, but it would have taken me some time. I just input the formula directly, along with the desired language, and the result was well done and worked flawlessly.
It saved me some time.
-
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So if you're paying mentors huge salaries to turn juniors from velocity drags into velocity boosters, you do it knowing those juniors are going to leave and take all that investment with them to a higher paycheck.
I'm not saying this is right, but that's the reality from the paycheck side of things, and I think there needs to be radical change for both sides. Like a trade union or something. The union takes responsibility for certifying skills and suitability, companies can be more confident in their hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and, I guess, pay becomes more skill- and experience-based so developers don't have to hop jobs to get paid what they're worth.
-
Agreed. I wanted to test a new config on my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of spending three hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.
It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I'm on 7). It would also make mistakes, and when I pointed out the mistake it would say, "Oh, you are totally right, I made a mistake," then go on to explain what mistake it had made and output new code. More often than not, though, the new code contained the exact same mistake. This is probably down to a lack of training data, where it's referencing only one example and that example just had a mistake in it.
In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that and it almost worked right away. A couple of minutes of tweaking and testing and I got it working.
I'm afraid of a future where forums and such don't exist and sources like Reddit get fucked and nuked. In an AI-driven world the incentive for creating new original content is way lower. So when the AI doesn't know the answer, you're just hooped and have to reinvent the wheel yourself. In the long run this will destroy productivity rather than deliver the gains people are hoping for at the moment.
-
I'm forced to use Copilot at work, and as far as code completion goes, it gets it right 10-15% of the time... the rest of the time it just suggests random (but credible-looking) noise or hallucinates variables and shit.
-
Sweden of all places? What happened in the last decade that Sweden is slowly losing its reputation as the country to follow on social issues?
-
Yeah, those job hoppers are the worst. You can always tell right away what kind of person they are. I've had to work with a "senior" dev who had 15 years of experience, and to be honest he sucked at his job. He couldn't do simple tasks, didn't think before he started writing code, and often got stuck asking other people for help. But he got paid big bucks, because all he did his entire career was work somewhere for 2-3 years and then job hop and trade up. By the time the company figured out the dude was useless, he'd moved on to the next one.
Such a shitty attitude, which is a shame because he was a good dude otherwise. I got along with him on a personal level. And honestly good on him for making the most he can, fuck the company. But I personally couldn't do that, I take pride in my work.
-
This isn't in any way a new phenomenon; there's a reason FizzBuzz was invented. There's been a steady stream of CS graduates who can't code their way out of a wet paper bag ever since the profession hit the mainstream.
Actually fucking interview your candidates, especially if you're sourcing them from a country with for-profit education and/or a rote-learning culture, both of which suck when it comes to failing people who didn't learn anything. No BS coding tests; go for "explain this code to me" kind of stuff. Worst case, they can understand code but suck at producing it, and that's still prime QA material right there.
-
This is only a guess, but it could be related to increased use of technology. Many things we interact with are simplified, and if you come across a word you don't know, your phone can give you simple synonyms, and if you can't spell, autocorrect will catch it.
It's the same problem people are talking about with LLMs, just through a different lens.
-
Very "back in my day" energy.
I do not support AI, but programming is about solving problems, not writing code.
If we're going to fixate on the tools, well, no developer uses punched cards anymore either. Is that a bad thing?
-
You're right that the goal is problem solving; you're wrong that inability to code isn't a problem.
AI can make a for loop and do common tasks but the moment you have something halfway novel to do, it has a habit of shitting itself and pretending that the feces is good code. And if you can't read code, you can't tell the shit from the stuff you want.
It may be able to do that in the future, but it can't yet.
Source: data engineer who has fought his AI a time or two.