AI Abstractions
Today, we see "AI" being sprinkled everywhere: that magic sparkle icon now has dozens of variations. I recall a time, not long ago, when "stories" were popping up in every app. Hopefully, this moment will pass and we will move on to more interesting features that use proper AI abstractions. AI is increasingly blending with engineering, and that is likely how we will make it useful instead of a fad. Things have moved very fast over the last two years. We don't know if the pace will hold, but we don't need perfect technology to create value and disruption. AI might look and feel like magic when it works, but it is not magic. Artificial intelligence (AI) is disrupting and transforming the way we learn. We might not realize it, but learning has already changed, and it will change much more.
Agents sit at an even higher level of abstraction than other generative AI solutions; think of engineering agents like Devin or Codex. Agents can also be very simple and small.
MCP is a standard method for providing context to LLMs. It's like USB: a universal way to connect LLMs with software that runs outside of them. Now, there is a frenzy to create MCP servers for every single thing in the universe.
Agents can perform powerful tasks thanks to the numerous MCP servers. In the future, we may have agents orchestrating agents. Not that I like all forms of orchestrators (remember ESBs), but it is very likely we will have agents orchestrating agents and agents integrating with other agents.
Learning has changed over time
People learn through various methods and sources, but over time, we observe changes. Today, AI is a force driving some of these changes. Before the 90s, we basically had no internet. The primary methods of learning were books, magazines, formal courses, and formal education.
When the internet emerged, we were still learning with the same methods, but in digital format. New forms of learning also appeared through blogging and forums. As the internet evolved from Web 1.0 to Web 2.0, we discovered learning via social media and gamification in very popular tools like Stack Overflow (which looks dead today). Following Stack Overflow, we saw the rise of video courses, primarily offered through platforms like Coursera and Udemy. We are now in a new era, where LLMs are becoming the primary source of knowledge. People do not post questions on Stack Overflow anymore. Now we are learning to ask LLMs questions, to do things using LLMs, and, of course, to verify what LLMs tell us, because they still hallucinate (a lot). How we learn is changing; therefore, how we develop software is also changing.
Programs are not created equal
Let's forget about AI for a moment. Before the advent of generative AI and LLMs, we had always relied on programs. However, not all programs are created equal. Some programs were more useful than others, and some programs were more complex than others.
If we compare the cat utility from Linux and Unix with a web browser, we will see a vast difference in levels of abstraction. Both cat and a web browser, such as Mozilla Firefox or Chrome, are programs, but they are very different. If we analyze cat and a browser, we can see differences in:
- Levels of Abstraction
- Lines of Code (LoC)
- Features
- Complexity
- Cost to build
- Purpose
- Tech Stack
That might sound obvious, and it is, but perhaps because things are moving so fast, do we truly understand the difference when we talk about AI?
AI: What do you mean?
People use the term "AI" loosey-goosey today (including in this post), as if it were one thing. However, it is many things, and each of them is a very different "thing".
Traditional AI has existed since the 50s. Agents are not a new concept either. Lots of companies are rebranding bots as "AI". Knowing the difference matters; the details matter. You would not hand the cat program in Linux/Unix to someone who needs to watch a movie. Because you know the difference, you know the right tool for the job. Considering AI, do we know the right tool for the job? Do we understand the right levels of abstraction? Do we know how to create proper abstractions?
Good Abstractions: Yes, they exist!
Good abstractions are often hard to see because they live inside the internal design of applications, and internal design is a long-abandoned discipline. All the focus is often on getting things done; people barely spend time doing proper external architecture, let alone internal design.
Unfortunately, we see more examples of poor abstractions out there because people lack proper design knowledge. That happens not because of a lack of UML usage, but because of a lack of know-how to do adequate design, of thinking about appropriate design, and of spending the time to review with a good architect.
However, you might think good abstractions are dead; they are not. We see good abstractions at both the macro and micro levels; the problem lies in the middle.
At the micro level, we see good abstractions in programming languages, operating system calls (syscalls), language SDKs, and even in some open-source libraries (not all libraries are good).
We also see good abstractions in products (not all products); look at a food delivery application like Uber Eats, DoorDash, or even iFood in Brazil. You press a button in your house, and food appears at your door; that's a great abstraction.
Hopefully, you see we can, in fact, create good abstractions in software. Now, the question is, do we know how to make good abstractions with AI? More specifically, talking about Generative AI.
AI: The Refactoring Killer
Pay close attention to software and products. Some products and software eventually die. However, before dying, they spend a long time in a sort of zombie mode: the product is alive and even generating revenue, but it is not being actively curated, with no refactoring, modernization, or improvement to the current experience. Such a mode is often referred to as "maintenance mode."
I like to call it zombie mode because it's funny, and it also highlights the sad state of the software. Now, AI, specifically generative AI, LLMs, and agents, can change that.
Because we can do more with less, even if we only achieve 10-30% more engineering productivity, software and products that were previously stagnant can get some fresh air.
To be fair, it's not exactly about productivity; it is very hard to measure productivity in engineering and digital products. So let's refer to it as "perception of productivity" and keep it loosey-goosey.
Now, forget whether the product is in a zombie state; it does not matter. Think about complexity and a pile of technical debt. Paying down technical debt is expensive, and companies often avoid it for economic reasons. With the advent of generative AI, it is possible to "fix" some problems the way duct tape does: we get some things done without actually addressing the real root problem. What happens, then, is that AI is killing refactoring.
- Complex User Experience: Instead of refactoring the flow and pages to fix the UX and frontend code, we utilize an AI agent to complete the task; we leverage AI to "hide" the complexity, albeit at a cost. But we are killing refactoring.
- Technical Debt on the Backend: Instead of re-designing 3-5 services, we just throw in an AI agent that "orchestrates" the flow between those services; instead of paying the expensive and long price of refactoring, we just work around the problem. Again, killing refactoring.
- Another Layer of Indirection: In engineering, we say that all problems can be solved with another level of indirection. That's true because we never do the right and expensive thing, which is to re-design and refactor systems; we just add more things. So we keep adding new levels of indirection, and that is happening with AI right now: look at MCPs and agents.
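The backend work-around described in the list above can be sketched in a few lines. This is a hypothetical illustration only: the legacy service names and the `agent_orchestrate` helper are made up for the sake of the example, not a real framework.

```python
# Hypothetical sketch: three legacy services that should be re-designed into
# one coherent API, but instead get hidden behind an "orchestration" layer.
def legacy_inventory(order_id: str) -> dict:
    return {"order": order_id, "in_stock": True}

def legacy_pricing(order_id: str) -> dict:
    return {"order": order_id, "total": 42.0}

def legacy_shipping(order_id: str) -> dict:
    return {"order": order_id, "eta_days": 3}

def agent_orchestrate(order_id: str) -> dict:
    # The "duct tape": an extra layer of indirection that stitches the
    # legacy services together instead of refactoring them.
    result: dict = {}
    for service in (legacy_inventory, legacy_pricing, legacy_shipping):
        result.update(service(order_id))
    return result
```

The flow now "works", but the three services are still there, unrefactored; the complexity moved up one level instead of going away.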
Think about this: how will we use AI, as new duct tape to work around problems, or as a way to introduce capabilities that did not exist before? If we do even less refactoring, we are simply adding more complexity and making our lives even harder when maintaining systems.
Agents
Agents are a higher level of abstraction for generative AI solutions. Agents hold the ultimate marriage between AI and Engineering. However, not all agents are created equal.
Model Context Protocol (MCP)
MCP is an abstraction on top of an existing API or some software capability. Like a database, a file system, sending an email, or posting an article in WordPress.
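To make the idea concrete, here is a minimal, stdlib-only sketch of the *shape* of an MCP server: it advertises tools and executes calls over a JSON contract. The names here (`ToyMCPServer`, `list_tools`, `call_tool`) are hypothetical stand-ins, not the real MCP SDK.

```python
import json

# Toy model of what an MCP server does: register capabilities as "tools",
# let a client discover them, and execute calls with JSON in / JSON out.
class ToyMCPServer:
    def __init__(self, name: str):
        self.name = name
        self.tools: dict = {}

    def tool(self, description: str):
        # Decorator that registers a function as a callable tool.
        def register(fn):
            self.tools[fn.__name__] = {"fn": fn, "description": description}
            return fn
        return register

    def list_tools(self) -> list:
        # What an LLM client would see: tool names plus descriptions.
        return [{"name": n, "description": t["description"]}
                for n, t in self.tools.items()]

    def call_tool(self, name: str, arguments: dict) -> str:
        # JSON in, JSON out -- the "universal port" contract.
        result = self.tools[name]["fn"](**arguments)
        return json.dumps({"tool": name, "result": result})

server = ToyMCPServer("files")

@server.tool("Count lines in a text blob")
def count_lines(text: str) -> int:
    return len(text.splitlines())
```

The real protocol runs over JSON-RPC with transports, resources, and prompts on top, but the core abstraction is the same: a uniform way for an LLM to discover and invoke outside capabilities.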
Levels of AI Abstractions
Bringing it all together, we need to start seeing the different levels of abstraction that are emerging with generative AI solutions. That is an essential step for us to start creating our own, better abstractions using generative AI.
LLMs, as they are right now, are the "brain" and the first level of abstraction. Code assistants are how LLMs break out of being just a "chatbot" app and change how we do engineering every day.
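The ladder of abstraction can be sketched with plain functions, each level wrapping the one below it. Everything here is an illustrative assumption: `fake_llm` stands in for a real model call.

```python
# Illustrative only: each abstraction level wraps the one below.
def fake_llm(prompt: str) -> str:
    # Level 1 -- the "brain". A real implementation would call a model API.
    return f"answer({prompt})"

def code_assistant(task: str) -> str:
    # Level 2 -- a code assistant is the LLM plus an engineering-shaped
    # prompt (plus editor integration and context gathering, omitted here).
    return fake_llm(f"Write code that can {task}")

def agent(goal: str, tools: dict) -> list:
    # Level 3 -- an agent loops: ask the "brain" for a plan, then act
    # through tools (in practice, MCP servers).
    transcript = [fake_llm(f"Plan: {goal}")]
    for name, tool in tools.items():
        transcript.append(f"{name} -> {tool(goal)}")
    return transcript
```

The point is not the code itself but the shape: each level adds context, tooling, and autonomy on top of the raw model.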
With the advent of MCP servers, agents can be potent and perform more complex tasks for us. However, agents can also be straightforward, contextual, and single-task; or they can be very generic and perform broad tasks, like engineering agents such as Codex or Claude Code.
We will likely have high levels of orchestration between agents, and agents orchestrating agents, like we saw with ESBs in the past.
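An orchestrator of agents can be sketched as below. This is a toy illustration under stated assumptions: the specialist "agents" are plain functions, and `orchestrator` is a made-up name, not any shipping framework.

```python
# Toy sketch: an orchestrator delegating to specialist agents, ESB-style.
def research_agent(task: str) -> str:
    # Stand-in for an agent that gathers context about the task.
    return f"notes on {task}"

def coding_agent(task: str) -> str:
    # Stand-in for an engineering agent that produces a change.
    return f"patch for {task}"

def orchestrator(goal: str) -> dict:
    # Routes sub-tasks to specialist agents and aggregates the results --
    # agents orchestrating agents.
    return {
        "research": research_agent(goal),
        "code": coding_agent(goal),
    }
```

Just like with ESBs, the risk is that the orchestration layer itself becomes the new complexity hotspot.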
Could more abstractions be created? It's too soon to say. Still, some people believe one company's "intelligence" could be talking to another company's "intelligence". That will really depend on the cost of LLMs (which keeps getting higher and higher) and on how much CONTROL and UNPREDICTABILITY we are willing to tolerate. One thing I believe, from what I have observed over time, is that humans have a hard time changing how they organize; we are still organized in industrial structures. Let's see.
The path forward
We have a lot to learn about designing products and software using AI (generative AI). We must learn how to properly build products using AI as a means to add new capabilities, not as a form of cheap refactoring, or as a refactoring killer. We must understand the new forms of abstraction that AI can create, how to use them effectively, and when not to use them at all. We have a lot to learn.
Cheers,
Diego Pacheco