For a long time, software engineering had a stable way of building software. Yes, not all companies had the same level of maturity or had properly digested previous movements like Lean, Agile, and DevOps. Still, right now we are living in a quite interesting time, when we are reconsidering what we know and what worked in the past in favor of perhaps a better way to build software. I fully acknowledge that we don't have the answer to what the winner is yet. Many industries are being disrupted by AI, but no industry is being disrupted more than technology itself. We have never seen anything like what we are seeing right now. The only way forward is to experiment, read, socialize, and reflect on what works and what does not.
Faster Than Light (FTL)
In my book, The Art of Sense: A Philosophy of Modern AI, I wrote about this illusion and seduction of FTL. The issue with FTL is that you can move that fast, but your thinking can't keep up. When every single living soul in your company is using AI like crazy, how can you make sense of it that fast?
It's so easy to get addicted to Claude Code because it's pure dopamine. You can snap your fingers and boom, you have something working (until it doesn't, because once you properly test it you find hundreds of bugs). It's also so easy to keep doing the old things we have always done. We need to stop and think about whether what we are doing today still makes sense.
It's easy to do what we always did, but now that we have such a powerful tool, perhaps we need to stop and reflect on whether the old ways still make sense. We fail to do that, IMHO, for a few reasons. First, we have been doing things the same way for a long time. Second, we often don't bake in time for retrospectives and reflection; we optimize for delivery, not for learning. Third, some of these changes are not obvious, like my post on the AI Shift Left, where you push code review down to the local Claude Code installation.
We need retrospectives more than ever

Retrospectives were important before 2023, and they are even more important after 2023. Retrospectives are mechanisms for us to think, reflect, and digest what we are doing. It might sound like a waste of time, because we are not shipping while we are thinking, but believe me, we need to think.
Lean and Agile are all about learning; you can't learn if you just move at FTL. What we will actually be doing is piling up a bunch of technical stuff we've never seen before (or maybe we have), and future problems for us to handle. We need to remember that speed only matters if the direction is right. If the direction is wrong, speed is poison. So we need to ask: are we going in the right direction?
2023, a vintage year: or the year I wish had never ended
2023 holds a special place in my heart for one reason: it was the year that AI did not take over. After 2024, and especially from 2025 onwards, AI took over. Back in 2023, we thought very, very differently. You would be called crazy, or unprofessional, if in 2023 you said:
- Do not look at the code
- Your IDE will be called Claude
- Now Grandpa can produce software
- Migrations can be done much faster, maybe 5-10x
- Product and Design can produce code
- Engineers can do product and design
- The bottleneck is ideas
- Code Review will be Dead
How fast can one rethink their beliefs? Well, I think we need to train people more than ever, because AI is not just another tool; it's a fundamental change in all aspects of engineering. From discovery to implementation, security, risks, trade-offs: everything changes now. Do we change, or do we stay the same? We need training more than ever!
What is an Agent?
Now the heart of the storm is AI agents. What is an agent? The problem starts right there: "agent" is a loosey-goosey term that means all sorts of things, so we could be talking about a huge universe of different things. For instance:
An agent can be as simple as a Markdown file we drop into a folder so Claude Code can fix its context-root problem and operate more efficiently; you can drop in a code-review agent in order to do AI shift left. An agent can also be an industrial-scale, complex system, running on an agent core with LiteLLM or even OpenRouter, providing observability, auditing, scalability, failover across multiple LLM providers, and much more.
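To make the simple end concrete, here is a minimal sketch of a code-review agent, assuming Claude Code's convention of subagents as Markdown files with YAML frontmatter (for example under .claude/agents/); the name, tool list, and instructions are illustrative, not prescriptive:

```markdown
---
name: code-reviewer
description: Reviews local changes for bugs, security issues, and style before commit.
tools: Read, Grep, Glob, Bash
---

You are a senior code reviewer. When invoked, run `git diff` to see the
pending changes, then review them for correctness, security, and clarity.
Report findings as a prioritized list, and only block on real defects.
```

At the industrial end, provider failover is mostly configuration. Here is a sketch of a LiteLLM proxy config, assuming LiteLLM's model_list/router_settings format; the model ids and environment variables are placeholders:

```yaml
model_list:
  - model_name: main-model
    litellm_params:
      model: anthropic/claude-sonnet-4-5             # placeholder model id
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: backup-model
    litellm_params:
      model: openrouter/anthropic/claude-sonnet-4.5  # same family, routed via OpenRouter
      api_key: os.environ/OPENROUTER_API_KEY

router_settings:
  # If the primary provider errors out, retry the request on the backup.
  fallbacks:
    - main-model: ["backup-model"]
```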
We can also use agents in a very cold, industrial, boring, gray-dominated way, like modern architecture that is soulless. Or we can use agents like chatbots, talking to them the way we would talk to humans, going back and forth over several rounds, in what, in the vintage year of 2023, was called iterative software development, or Agile for short. How can we build software without feeling it? Isn't it arrogant (and therefore waterfall) to assume we have all the answers and we just need to execute?
What if we don't understand the users? What if we don't fully know what value means? What if the software is not what it should be, and we need multiple cycles of build and feel? By feeling, I mean we care, and we craft meaning and vision into software that users love immediately.
Gray-style software is about prompt obsession. I don't believe in one-shot prompts, because they reduce the heart-driven process of building, taste, feel, and software discovery. A single prompt is cold and arrogant, like a requirement (which is a lie); that's why I don't believe in prompt requests. They forget about the many loops and the discovery as we go (Agile, in other words).
BTW, the same genie (as Kent Beck calls LLMs / AI coding agents) is the same genie that every company has. How do you differentiate if everyone is doing the same thing? Perhaps we need to do things differently. Perhaps we need punk rock.
Not all Agents are created equal
This is not perfect, and I don't think all of these are viable or realistic options, but it's an attempt to make sense of the agent landscape. I'm trying to classify agents along two dimensions. The first is how autonomous they are: whether they mostly work alone, or whether they are "semi" and need a human in the loop, adulting across several loops. The second dimension is whether there is a single agent, or a lot of orchestration, which I call multi-agent.
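Here is one way to lay out those two dimensions, using the tools mentioned in this post; this is my rough reading, and other placements are defensible:

| | Single agent | Multi-agent |
| --- | --- | --- |
| Semi-autonomous (human in the loop) | GitHub Copilot, Cursor, day-to-day Claude Code | A human adulting several agent sessions in parallel |
| Autonomous | Claude Code running headless and unattended | Ralph loops (brute force), Gas Town (self-regulating orchestration) |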
There are different levels of maturity across agents. Gas Town is the Kubernetes of agents, perhaps too complex and too expensive for most of us. Claude Code creator Boris Cherny has a very simple and vanilla setup, and perhaps simplicity is the ultimate sophistication, as Da Vinci once said.
There are a couple of styles of multi-agent systems popping up, like:
* Focus on brute force: like Ralph loops (a minimal sketch follows this list).
* Complex orchestration, self-regulating systems: like Gas Town.
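For the brute-force style, the core idea fits in a few lines of shell: feed the same task prompt to the agent over and over, and let repetition do the work. A minimal sketch, assuming the Claude Code CLI with its non-interactive print mode (-p); PROMPT.md is a hypothetical task file, and the unattended-permissions flag is dangerous by design:

```bash
#!/usr/bin/env bash
# Ralph-style brute-force loop: run the same prompt repeatedly; each
# iteration starts from whatever repo state the previous one left behind.
while :; do
  cat PROMPT.md | claude -p --dangerously-skip-permissions
done
```

In practice you would want a stop condition (tests passing, an iteration budget) rather than a literal infinite loop.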
Paddo wrote an amazing analysis on both sides of this spectrum.
If we want more structure
I asked Gemini 3 and Nano Banana Pro to take a shot at my classification system, and I got this:
We are evolving from code assistants to multi-agent systems, and very fast. Now we need to make sense of all these options and see what is signal and what is noise.
Workflows
Now we arrive at workflows. We went from 2023, the vintage year, when we coded and operated very differently, to code assistants like GitHub Copilot and Cursor, to Claude Code (the beginning of agents). Each evolution reduces and reshapes the engineer's role.
From an AI babysitter, to an architect (Claude Code), to maybe a CEO (Gas Town). Perhaps in the near future we won't need huge teams, just small teams of highly capable, multi-domain people who can do amazing things and scale themselves with agents.
We know things were working by 2023, a.k.a. vintage coding. We know things work with Copilot. We are learning whether Claude Code will work in a scalable way, but we are not done with that yet; I don't think we ever had the time to digest it, and we are already being pushed into more complex multi-agent systems.
We need time to think, digest, reflect, and see the "bad effects" of agents and multi-agent systems. We are still on the honeymoon; we cannot say this is a solved, understood, and proven way of working. There is big promise with Claude Code, but only if we can make it sustainable.
Learning By Experiments
From the wisdom of Gary Vaynerchuk: "In the time of the Jetsons, behave like the Simpsons". Maybe not like Ralph, Gary :-). Human relations matter more than ever. AI may be here, but we are still humans, and who buys software is not AI; it is still humans.
How I think we should move forward:
- Be open to forgetting all you know; rethink every "standard" way you work.
- Don't assume it's easy; don't fall into traps like: "Oh, Claude Code generated the code, so it's done."
- Remember it's all about learning; we need retrospectives, we need to talk and digest things.
- Producing code at FTL does not mean learning at FTL.
- Let's not forget about the user, who does not care about AI and cares about value.
- Let's rethink how software engineering should look; let's explore and experiment with different workflows to see what works and what doesn't.
- Experimentation is a great way to go; we can't assume we have all the answers.
- AI Shift Left is a great start.
- Let's remember that technical debt and security mistakes are silent, and we might only discover them when it's too late.
- Direction is more important than speed; speed in the wrong direction is poison.
- Productivity gains only mean anything if they are end-to-end and we can reduce waste and build better products.
Cheers,
Diego Pacheco