The picture of software development also looks completely different. Code that used to be readable in a few lines becomes 100 lines, overblown because, well, code is cheap. Now, I could argue that this makes things unreadable and so on, but honestly, who cares? Right? The AI can fix it if it breaks...
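To make the "few lines becomes 100 lines" complaint concrete, here's a small hypothetical illustration (function names and code invented for the example, not taken from any real codebase): the same computation written tersely and then "AI-expanded" with ceremony that adds nothing.

```python
# Concise version: three lines, trivial to review.
def average(values):
    """Return the arithmetic mean of a non-empty sequence."""
    return sum(values) / len(values)


# The same logic, bloated: manual loops and checks that callers
# never need, the style being complained about above.
def average_verbose(values):
    """Return the arithmetic mean of a non-empty sequence."""
    if values is None:
        raise ValueError("values must not be None")
    if not isinstance(values, (list, tuple)):
        values = list(values)
    if len(values) == 0:
        raise ValueError("values must not be empty")
    total = 0
    count = 0
    for value in values:
        total += value
        count += 1
    result = total / count
    return result


print(average([1, 2, 3]))          # → 2.0
print(average_verbose([1, 2, 3]))  # → 2.0
```

Both produce the same answer; only one of them is pleasant to maintain.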
So what do you guys think? Is this the future? Maybe the skill to focus on is orchestrating AI, and if you don't do that, you become a legacy developer: someone with COBOL-like skills, still needed, but from the past millennium.
Juniors don't have that skillset yet, but they're being pushed to use AI because their peers are using it. Where do you draw the line?
What will happen when the current senior developers start retiring? What will happen when a new technology shows up for which LLMs have no human-written code to train on? Will pure LLM reasoning and generated agent skills be enough to bridge the gap?
These are all very interesting questions about the future of the development process.
1. AI gets good enough, fast enough, that by the time the senior people are retiring, it won't matter anyway
2. Software becomes mostly unreadable and nobody really understands how it works, but the AI is good enough that this is ok
Both are hard for me to imagine right now, but if you'd asked me five years ago whether AI would ever be good enough to commit to my codebase, I would have said, "I really doubt it." Yet here we are: AI code is sometimes better than handwritten code (depending on the person, of course).
Would love to hear others' thoughts on these as well.
AI systems look at code on the internet that was written by humans. This is smart, clean code, and they learn from it. What they produce, unreadable spaghetti code, is the maximum they can squeeze out of the best code written by humans.
In the near future, AI-generated code will flood the internet, and AI will start training on its own code. On the other hand, juniors will forget how to write good code.
And when these two factors come together in the near future, I honestly don’t know what will happen to the industry.
I agree that AI-generated code will really start to piss in the pool, so to speak. I'm not sure the models will get better without a lot of hand curation and signals of what is good vs. bad vs. popular code. They emphatically are not the same.
We need to remember that the core of what “logic” is can be understood by every human mind, and that it’s our individual responsibility to endeavor to build this understanding, not delegate or hand-wave it. For all of human history, delegating/hand-waving away basic logic that can be understood by actuarial/engineering types has never gone well in the long term.
Even at the presidential level, today:
>RFK Jr claims basic math rules don’t apply to White House https://www.independent.co.uk/tv/news/rfk-jr-math-percentage...
We had a couple of decades of brilliant engineers working for FAANG. What did we get as a result? Just crap: Twitter, Instagram, YouTube, Facebook. Imagine all those brilliant minds working on something meaningful instead.
Same goes for LLMs
I've found experienced developers leverage AI as a force multiplier because they can scrutinize the output, unlike juniors who often just paste and move on. The real skill is becoming an AI orchestrator, prompting effectively, and critically validating the output. Otherwise, if you're just a wrapper for AI, then yes, you become the "legacy developer" you mention because you're adding no critical thinking or value.
I can't imagine the people using many agents in parallel are even checking the fitness of the output they are generating, let alone the design, structure, and quality of the code itself.
There's always been the need to verify the code matches the business requirement, right? It used to be when you asked someone why they wrote the code the way they did, they'd tell you they thought it was the right way because X or Y. But with AI they can respond saying they actually don't know why they wrote it a certain way. That's just what ChatGPT or Claude told them to do. So, that's the nightmare part that people are experiencing.
Code reviews are important and software architecture skills are just as important now.
How do you get it to not add so much unnecessarily defensive code?
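As a hypothetical sketch of the "unnecessarily defensive" pattern being asked about (names and code invented for illustration): an internal helper wrapped in guards its callers never trigger, next to the trimmed version most reviewers would prefer.

```python
# Over-defensive: swallows errors and guards against inputs
# that internal callers never actually produce.
def parse_port_defensive(raw):
    if raw is None:
        return None
    if not isinstance(raw, str):
        raw = str(raw)
    try:
        port = int(raw.strip())
    except (ValueError, TypeError):
        return None
    if port < 0 or port > 65535:
        return None
    return port


# Trimmed: validate once, fail loudly, let real bugs surface.
def parse_port(raw: str) -> int:
    port = int(raw)
    if not 0 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

One mitigation people report is stating this preference explicitly in project prompt guidelines (e.g. "fail fast; no speculative None checks or broad try/except on internal calls"); how well the model follows it varies.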
Nowadays everybody is doing everything with AI, young and old alike. It's very hard to justify not doing it. That being said, you can produce good code with AI if you know what it should look like and spend the time to prompt and iterate.
Everyone seriously doing it has a bunch of agents in a corporate-like structure doing code reviews. The bad AI code is when someone is just using a single instance of Claude or ChatGPT; when you have 50 agents competing to write the best code from a single prompt, it hits differently.
PS: Compare Assembly with Python: for sure the ratio is more than 10x. Still, we need many more devs compared to the early days. For me the question is what future software dev looks like (if the job still exists).
Yes. The future is quickly produced slop. Future LLMs will train on it too, getting even sloppier. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.
Is this because the guys claiming success are working in popular, well-known, more limited areas, like JavaScript in web pages, while the people outside those, with more complex systems, don't get the same results?
I also note that most of the "Don't code any more" guys have AI tools of their own to promote...
At the same time, I see the future being brighter with the help of these coding LLMs. I personally was not building software for years, focusing on management-like work. Serious coding during "free time" was just too heavy a lift: you need time to sleep, eat, and do some IRL things too...
Now, having experience in building software and caring about what I create and why, I can do this far more quickly with LLMs, and it kind of opens possibilities I could only dream of before. What used to mean getting a few spare millions and hiring a team is now paying $20 to Cursor/Claude and spending a few days guiding it as if it were a team of junior outsourced devs: it's painful sometimes, but if you really know what you're doing and why, it works. And nothing stops you from tweaking pixels once the majority of the work is done; you'll even have the will to, as opposed to writing it all yourself and spending all your mental energy on routine stuff.
So... if people learn to use this hammer properly, I suppose the future might be brighter than the past. And those who actually care but didn't have the time can now do the things they're passionate about on their own.
Nope, because this is all I do, and the AI doesn't do it right either.