Tech's Dumbest Mistake: Why Firing Programmers for AI Will Destroy Everything
-
Well, yeah, but those costs are for tomorrow's executive to figure out; we need those profits NOW.
-
While true, that is a weak analogy. Software rots and needs the constant attention of competent people, or the shit stacks up.
-
In my experience, you actually need more people to maintain and extend existing software than for the initial build-out.
That's usually because of scalability concerns, the increasing complexity of the system, and technical debt coming due.
-
The irony of using an AI generated image for this post...
AI imagery makes any article look cheaper in my view; I am more inclined to "judge the book by its cover".
Why would you slap something so lazy on top of a piece of writing you (assuming it isn't also written by AI) put time and effort into?
-
I work for a Fortune 500 company.
We just recently lost a principal engineer who built an entire platform over the last four years.
Just before they left, I noticed they were using AI an awful lot. Like... a lot a lot. Like, "I don't know the answer on a screen share, so I'll ask ChatGPT how to solve the problem and copy/paste it directly into the environment until it works" a lot.
They got fired for doing unrelated shit.
It's taken us three months and hundreds of hours from at least five other principal engineers to try to unravel this bullshit, and we're still not close.
The contributions and architecture scream AI all over them.
Point is, I'll happily let idiots destroy the world of software, because I'll make fat bank later as a consultant fixing their bullshit.
-
The cloud provides incredible flexibility, scale, and reliability. It is expensive to run three or more data centers with data center staff. If on-prem data centers were such a great deal compared to the many 9s of reliability the cloud provides, companies would be shifting back en masse by now.
-
Yeah, for many things it's easier to just give up on having the AI do it. Even if you somehow succeed, what it gives you will likely be such a mess that it's not worth it.
-
Oh, no way. It was a years-long process to get to the cloud; then the devs got hooked on all the toys AWS was giving them and got strapped in even further. They couldn't get out now if they wanted to, not without huge expense and rewriting a bunch of stuff. No CTO is going to die on that hill.
They jumped into the cloud for the same reason they jumped into AI: massive hype. Only the cloud worked. And now a cut of the profits is all Amazon's. No app store needed. MuwAHhahahAhahahaa
-
I thought it was intentional AI slop
-
Most extension today is enshittification. We've also seen major platforms scale to the size of the Earth.
If you're only going to maintain, and don't have a plan for adding features beyond duct-taping AI to the software, what use is keeping a dev team at the size you needed when you were creating new code?
-
I'm not saying you can fire everyone, but the maintenance team doesn't need to be the size of the development team if the goal is only to maintain existing features.
-
AI mostly seems useful when you don't know a specific concept and just need the base ideas. That said, given it's often confidently wrong and doesn't involve humans actively steering you toward better ideas, I still find Stack Overflow more helpful. Sometimes the answer to your problem is to stop doing what you are trying to do and attack the problem from a different angle.
-
Getting the real requirements nailed down from the start is critical, not just doing the work the customer asked for. Otherwise, you get six months into a project and start talking about scrapping 90% of the completed work because the requirements were bad from the get-go. The customer never fundamentally understood the problem, and you never bothered to ask. Now everyone is mad and you've lost a repeat customer.
-
I find it's the most useful when asking it for small snippets of code or dealing with boilerplate stuff. Anything more complicated usually results in something broken.
-
There's also the tribal knowledge of people who've worked somewhere for a few years. There's always a few people who just know where or how a particular thing works and why it works that way. AI simply cannot replace that.
-
Institutional knowledge takes years to replace.
-
I also find it is best when I'm very specific and give as many identifiers as possible. App name, version, OS, quoted error code, etc.
-
I know it's a meme to hate on generated images, but people need to understand just how much that ship has sailed.
Getting upset at generative AI is about as absurd as getting upset at CGI special effects or digital images. Both were the subject of derision when they started being widely used: CGI was seen as a second-rate knockoff of "real" special effects, and digital images were seen as the tool of amateur photographers with their Photoshop, a crutch in place of photographic talent.
No amount of argument from film purists, or nostalgia for the old days of puppets and models in movies, was going to stop computer graphics or digital image capture and manipulation. Today those arguments seem so quaint and ignorant that most people aren't even aware there was a controversy.
Digital images and computer graphics have nearly completely displaced film photography and physical model-based special effects.
Much like those technologies, generative AI isn't going away and it's only going to improve and become more ubiquitous.
This isn't the hill to die on, no matter how many upvotes you get.
-
This is a bad analogy.
It would be more akin to firing your fire departments because you installed automatic hoses in front of everyone's homes. When a fire starts, the hoses will squirt water toward the fire, but sometimes they'll miss, sometimes they'll squirt backwards, sometimes they'll squirt the neighbour's house, and sometimes they'll actually hit the fire.
-
Companies that are incompetently led will fail, and companies that integrate new AI tools in a productive and useful manner will succeed.
Worrying about AI replacing coders is pointless. Anyone who writes code for a living understands the limitations that these models have. It isn't going to replace humans for quite a long time.
Language models are hitting some hard limitations, and we're unlikely to see improvements continue at the same pace.
Transformers, mixture-of-experts, and some training-efficiency breakthroughs all happened around the same time, which gave the impression of an AI explosion. But the current models are essentially taking advantage of all of that already, and we're seeing pretty strong diminishing returns on larger training sets.
So language models, absent a new revolutionary breakthrough, are largely as good as they're going to get for the foreseeable future.
They're not replacing software engineers; at best they're slightly more advanced syntax checkers/LSPs. They may help with junior-developer-level tasks like refactoring or debugging... but they're not designing applications.