Many years ago, I watched a talk about tech debt. The presenter argued that any third-party dependency counts as tech debt. His argument was the following - tech debt is created when we cut corners to deliver faster. If you adopt a library - a piece of code you could have written yourself - you do so to save time, at the cost of a potential future liability. You often use open-source libraries whose vulnerabilities get discovered and exploited. The maintainer might abandon the project, or they might do a licensing rug pull if they feel like everybody is using their work and not giving them a dime.
I argue that AI is no different in this regard. It lets you rapidly "develop" code to meet business demands. However, there are fundamental limits to the current AI technology, limits that are very hard to circumvent unless the technology itself changes:
- data set poisoning - you cannot prove that the model you are using wasn't poisoned through a supply-chain attack on the training infrastructure. The model can be trained to react to particular kinds of prompts by installing backdoors or vulnerabilities into your code. Unless you know how to implement things properly, you have no way of validating the code.
- lack of a negative feedback loop - if your current business requirements contradict the previous ones, AI will happily oblige and silently break some edge case. Ask yourself - how many times have you tried to implement a feature, only to find out that trade-offs need to be made, or that the requirements were poorly specified? AI won't tell you, or at the very least it won't tell you unless you ask. The only way to be sure is to implement the thing yourself.
- inability to be consistent - I read somewhere that it is a good idea to first instruct the bot to formulate an implementation plan, wait for approval, and only then carry on with the implementation. Claude Opus managed to follow this instruction for the first two requests and completely ignored it the third time. The internet is full of stories where the bot suddenly went haywire and acted against the requirements in the prompt.
- inability to be original - you tell AI to implement something. Do you believe it did it in the safest and/or most performant way possible? Think again. It wasn't trained on a "best practice" corpus (no such thing exists); it was trained on a "common practice" one. It will give you mediocre, average outputs, not great ones. Unless you know how to do it correctly, you have no way of telling.
They say that the definition of insanity is doing the same thing over and over and expecting a different result. In that case, AI is insanity.
All the techbros, CEOs, venture capitalists, opportunists, and influencers are trying to convince you that if you don't use AI, you're useless, antiquated, slow.
Sure, not using AI at all would be an extreme overreaction. AI can supercharge learning, analysis, and prototyping if (emphasis on if) it is used responsibly. But many people don't use it responsibly. Agentic AI that can do stuff "without any supervision" can also screw anything up.
What I am trying to say is this - if you don't trust your junior devs and review their code, why would you blindly trust something a machine randomly generated?
If you are a developer
Use AI responsibly. I, for example, either use it to generate throwaway code, or I manually refactor the code myself, just to force myself to think about every single line. Just today, I was touching some CI pipelines and encountered a very long condition line in a YAML file.
I asked the bot if the line could somehow be pretty-formatted. It obliged and used this weird >- operator. I asked back what it does. I asked for sources, and I went to those sources to study the operator. I don't want to be a passive prompter; I want to understand the code. Next time, I won't have to waste my time prompting the AI - I can just format it myself.
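For the curious: in YAML, >- marks a folded block scalar with a "strip" chomping indicator - line breaks inside the block are folded into spaces, and the trailing newline is dropped. A small illustration (the key and condition here are just a made-up example, not my actual pipeline):

```yaml
# Hypothetical CI job condition, split across lines for readability.
# '>' folds the following lines into one line (newlines become spaces);
# '-' strips the trailing newline from the resulting string.
if: >-
  github.event_name == 'push' &&
  github.ref == 'refs/heads/main'
# Parses to the single-line string:
# "github.event_name == 'push' && github.ref == 'refs/heads/main'"
```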
Generating scaffolding code is a borderline case. I bet you could write a deterministic tool that does the scaffolding equally well, without the fear of the bot randomly screwing you over. If you let AI write the tests - do you read and reason about every single test case? Do you refactor the code so it doesn't just vomit tokens that will inflate your bill the next time you want to add a test case?
But for the love of god, if you let the bot generate production code and you don't even know what it produced - you are just tying a gun to your dick and giving somebody else the ability to pull the trigger.
If you are a manager
I know that most CEOs are just jacking off stakeholders and don't really care about the people they employ, but if you are a middle-to-bottom manager, you should care about your people. You should care about what drives them. And the reality is that most of those nerds code because they love the act of coding and hate delegating work and explaining business requirements. So why would you force them into doing exactly that with AI? You're just going to kill their motivation.
And for those who are diving head first into the AI-driven submarine - keep an eye on them. We humans are a very lazy bunch. The moment we are allowed to stop thinking for ourselves is the moment we sell our souls to the devil. Why do you think there are so many autocracies around the world? People are lazy. People like it when somebody else thinks for them. People only want change when the emperor makes their lives miserable.
I honestly fear that people who unapologetically like using AI and see agentic development as a good thing will grow dumb. There are already studies out there suggesting that cognitive functions deteriorate when you rely on AI (and on most of the conveniences of the modern internet).
Also, do hire juniors
Anybody who refuses to hire junior developers because AI can replace them is an idiot. Sorry, not sorry. Experienced developers don't grow on trees. They grow on battlefields. They learned by writing code. They learned by negotiating with customers, and product managers, and CTOs. If you refuse to hire juniors and teach them, you are creating tech debt worse than AI-generated code. You are contributing to the shortage of experienced developers down the line, and I hope your business goes bankrupt hard (I just pity the people who will stick with you up until that point).
Also - maybe consider forbidding your juniors from using AI to generate code. Learning? Okay. Understanding your code base? Great. Writing boring documentation? Abso-fucking-lutely. But no code generation. They need to experience it for themselves. They need to learn why patterns exist, how to refactor code, and how to notice code smells. If they don't learn it the hard way, they won't be able to direct anybody, let alone an agentic chatbot.
Feel free to tell me what a backward-thinking loser I am in the comments below.