De-Risking
PMI has a whole discipline dedicated to risk management. The DevOps movement has several principles for reducing operational risk, such as continuous deployment, infrastructure-as-code, progressive rollout patterns, traffic splitting, and more. Financial institutions might terminate or restrict business relationships with clients, and even whole categories of clients, to eliminate risk; hence the term "de-risking." Risk management was very popular in the '90s and even the 2000s, and it's not dead, IMHO; yet nowadays nobody talks about risks, and nobody even monitors them. I don't know why that happened, or even if it's true for industries besides technology.
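To make the traffic-splitting idea concrete, here is a toy sketch of a weighted router plus staged promotion, the mechanism a service mesh or deployment controller gives you declaratively. All names, stages, and percentages here are my own illustration, not any particular tool's API:

```python
import random

def route_request(canary_weight: float) -> str:
    """Send a request to the canary with probability canary_weight,
    otherwise to the stable version."""
    return "canary" if random.random() < canary_weight else "stable"

# A progressive rollout is just that weight increasing in stages,
# with a health check gating each promotion (stages are illustrative).
ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]

def promote(current_stage: int, canary_healthy: bool) -> int:
    """Advance to the next rollout stage only if the canary looks healthy;
    otherwise roll back to stage 0 (almost all traffic on stable)."""
    if not canary_healthy:
        return 0
    return min(current_stage + 1, len(ROLLOUT_STAGES) - 1)
```

The point of the staged weights is exactly de-risking: a bad release hurts 1% of traffic, not 100%, and an unhealthy canary rolls back instead of promoting.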
The DevOps movement teaches us many things, one of which is the post-mortem, or blameless incident review. Incident reviews are a great practice if done right, meaning engineers are actually on the call and lessons learned actually drive change.
Beyond that, the problem is that incident reviews happen after the fact, which only prevents future problems if we do our homework; if not, it does not prevent anything. It would be great if we could prevent problems before they happen. Sure, it's impossible to fix all problems in advance, but a lot can be fixed with a little more imagination, creativity, and scenario-playing. Some companies and some people practice "pre-mortems" before projects start: they imagine everything that can go wrong, all the ways things can fail, before they happen.
De-risking must be glued to negotiation; that's how it becomes more useful.
Now, before I continue, I need to say that this is not about being waterfall. I'm not in favor of spec-driven development (SDD). It's very easy to take what I'm saying out of context and think I'm praising waterfall, which I'm not. SDD implies that you have all the answers upfront, and that is arrogant and wrong. You need the messy middle, where you try, see how it goes, try again, figure things out, and learn.
We also need to understand that products should fix real problems. SDD implies that we already have the answers, that specs are just a way to tell the AI what we need, and that therefore the AI will get it perfectly right? Well, for me, SDD is pure risk.
AI is a risk, too. People see AI as:
1. The revolution of the machines
2. We don't need engineers anymore; anyone can be an engineer
3. We never need to learn anything even close to code; just get your prompt right
4. Code review is the bottleneck; find another way to do that...
5. In one year, LLMs will create X, Y, Z; in one year, LLMs will write all the code...
6. AI will only get better, it's exponential; I don't believe in diminishing returns because AI is magic...
Just to be clear, I disagree with all six of these items. But believe me, a lot of people really think this way...
Let me say one more thing. AI will write all the code for one reason: because Claude Code is an IDE. Claude Code is what people use all day long, every day; not because the models are perfect (they are not). Again, start seeing Claude Code as an IDE. If I told you that IntelliJ IDEA, VS Code, or NetBeans wrote 100% of the code, you would say no way, they are just IDEs. Claude Code blurs the lines, but IMHO it's still an IDE, so if it writes 100% of the code, it means nothing. 100% of the code was written using VS Code before, or IntelliJ if you do Java; who cares? What percentage of the code is written by AI is the wrong question...
What I care about and what we should care about is:
- Do we have better products? Do we make users happier?
- Do we have better software? Do we have better quality (not QA)?
- Do we have better coverage and test diversity?
- Do we have better systems and less technical debt?
- Are we happier and producing more value?
- Do we add value faster in a sustainable way?
AI is also a risk, and the risk is big, big, big time:
- Risk #1: Destroying the whole engineering field: right now, many believe the technology field is in a depression. How will we deal with juniors? The risk is that we have fewer and fewer of them, so we will struggle to find engineers in 5-10 years. How will people develop skills? Skill tends to decay if we just prompt non-stop.
- Risk #2: Bugs and incidents will rise: since people are vibe coding like there is no tomorrow, the risk is that we make software worse than it is. We are already living in a software quality crisis; in 5-10 years, the crisis could be even worse. We will be flooded with bugs; that's already happening...
- Risk #3: Losing talent: we now think we don't need to hire people, or that mentoring people is a waste of time. Again, in 5-10 years, it will be much harder to retain talent if AI makes us treat people as a commodity.
If you never valued engineering, now is the time, big time.
However, we need to distinguish signal from noise, hype from value; otherwise, all these risks will cost us big time too. IMHO, the de-risk AI playbook is this:
- Never stop hiring juniors; instead, double down.
- Teams want to get rid of everyone who is not senior; people are allergic to mid-level and junior engineers. That must change, because we will have worse engineers in 5-10 years if the vibe-code/SDD mindset stays around...
- Focus on AI to make your systems better, not worse. Make sure AI helps you to:
- Use AI to prototype and learn, not to call it done fast. Learning is about cycles, not one-shot production deploys (that's waterfall, and it sucks).
- Add more test coverage and more test diversity
- Add more observability
- Create different flavors of solutions and pick the winner
- Do more refactoring and have less technical debt
- Stretch yourself and do things you would not have done in the past, or would not have had the time to do; but do it right: read the code, have tests, and have proper engineering practices in place.
- Make sure you use AI where it is safe
- Don't stop learning, don't stop acquiring skills, and don't use AI for everything. Sometimes AI is better just for input, not for output.
- If you had a genie, how would you know you are making a good wish? You know you suck at wishing if your first wish is 1000x more wishes. Well, that tells us something about your prioritization and your strategic and systems thinking. With AI, it's very easy to get trapped in execution and ask AI to do the wrong thing, since it can do it "so fast." Instead, think about whether the action is the right one. For example, some systems and libraries should be decommissioned or rewritten, not just patched with AI.
- Instead of making everything permanent, embrace experiments: try things and see how they go; if you like them, keep them; otherwise, toss them away. Our industry knows very little about AI engineering with agents; what is it, two years old, max? We should be open to change, and experimentation with caution is a great way to go.
- Make sure we add all the proper guardrails to make AI less destructive, like:
- Make sure AI coding agents write tests
- Make sure AI coding agents have proper observability (which is not just logs)
- Make sure AI coding agents have good hooks that trigger linters, test suites, and systems with policy-as-code in place, e.g., a k8s deployment or terraform apply.
- Make sure there is a constant retrofit of learnings into your working process.
- Keep learning, keep doing PoCs, keep experimenting.
- Make sure you change how you think and what you believe; otherwise you are changing nothing, and AI will be just a tool and therefore much less effective. Here we need a lot of good judgment, because there is real value, but there is also a lot of hype.
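The guardrail hooks above can be as simple as a script that chains your checks and fails fast on the first broken one. This is a minimal sketch; the two placeholder commands stand in for whatever linter, test suite, and policy checks your stack actually uses (e.g., `ruff check .`, `pytest -q`, a `terraform plan` gate):

```python
import subprocess
import sys

def run_guardrails(commands) -> bool:
    """Run each guardrail command in order; stop and report on the
    first failure. Returns True only if every check passes."""
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"guardrail failed: {' '.join(cmd)}")
            return False
    return True

# Illustrative guardrail chain -- swap in your real linter, test runner,
# and policy-as-code checks. These placeholders just always succeed.
GUARDRAILS = [
    [sys.executable, "-c", "print('lint ok')"],
    [sys.executable, "-c", "print('tests ok')"],
]
```

Wired into an AI coding agent's post-edit hook (or a plain git pre-commit hook), a runner like this is what keeps "the agent wrote it" from meaning "nobody checked it."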
If everything is an experiment, failure is basically learning. Actual failure is de-risked by the way you work. In Agile, we have this thing called "fail fast": you spend a sprint trying something, and if you fail, you only lost one week of work. Again, if everything is an experiment, we can learn and de-risk before committing to permanent changes for everyone.
Going back to negotiation: before you start something, you are in a position to change things that, in the middle of the road, are much harder or simply not practical to change. Once you are executing, the expectation is to get it done; it is hard to fundamentally rethink, because not meeting expectations is itself a risk. But before you start, you can change things more easily. It's not a given, but it is more possible. We need to take more advantage of this.
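One cheap way to keep a change as an experiment rather than a permanent commitment is a feature flag with deterministic per-user bucketing: roll it out to a slice of users, watch, then promote or toss it. This is a toy sketch; the flag names and percentages are invented for illustration (real systems use tools like LaunchDarkly or Unleash):

```python
import hashlib

# Hypothetical flags: the fraction of users who see each experiment.
FLAGS = {
    "new-search-ranking": 0.10,  # 10% experiment
    "new-editor": 1.0,           # fully rolled out
}

def bucket(flag: str, user_id: int) -> int:
    """Deterministically map (flag, user) to a bucket 0-99, so each
    user sees a stable variant across sessions."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag: str, user_id: int) -> bool:
    """A flag is on for a user if their bucket falls under the rollout
    percentage; unknown flags default to off."""
    return bucket(flag, user_id) < FLAGS.get(flag, 0.0) * 100
```

Killing a failed experiment is then a one-line change (set the fraction to 0.0), which is exactly the "toss it away" de-risking move: nothing is permanent until the data says it should be.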
Cheers,
Diego Pacheco
