AI Agents and Distributed Monoliths

Distributed monoliths are a significant source of technical debt and a major anti-pattern: they combine the worst of monoliths and microservices. They are very easy to create because people often don't learn proper principles, and the same mistakes keep happening over and over again. Distributed monoliths are everywhere: in DevOps solutions, on the frontend, even in data. Why? Because we still don't add proper isolation to our systems. Now distributed monoliths will happen with AI and Agents, or "Agentic" solutions. Let's understand what's going on and how we can protect ourselves from another disaster.

What is an Agent, really?

Have you ever thought about what an agent is? For sure, it's not a frontend application. Of course, we can have a chat on the frontend or even Generative UI, but fundamentally, the agent will be on the backend. Since AI, Agents, and even Agentic are overloaded terms, we need to define them. Imagine you are creating, or have already created, agents in your company.

ChatGPT, Gemini, and Claude are not open-source applications; you cannot download the source code and add agents directly in the UI. For sure, you have integrations like Office/Google Docs, and many more via MCP, but you won't be able to customize 100% of these apps, because they are proprietary and you don't have the source code.

People also confuse agents with productivity tools. You can use Claude Code to create custom agents. You can also create your own MCP servers and integrate them with Claude Code. In that case, the agent will run on your machine, very likely communicating with Claude Code via standard input/output (in other words, your terminal).

Now, you need to think about creating agents as "features" in your solutions. You won't deploy Claude Code to the cloud and serve requests from there; it doesn't work like that. Same with Microsoft GitHub Copilot: you won't install it in your cloud and serve requests from there either.

Based on everything I said here, when creating your AI solutions, your agents will be backend applications. Having said that, we can't run backend applications out of thin air; we need a medium.

I promised a definition. Agents have characteristics like:

  • Autonomy: they perceive the environment, make decisions, and get things done.
  • Feedback loops: based on events or outcomes, they adjust and do iterative problem solving.
  • Access to tools (web search, create files, read files, etc.).
  • Some level of "reasoning" and lightweight planning.
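
To make that concrete, here is a minimal sketch of what such a loop can look like. This is not any specific framework or SDK; callLLM, the tool names, and the message shapes are all illustrative assumptions.

```typescript
// Minimal agent loop sketch. callLLM and the tools are illustrative stubs,
// not a real SDK or framework.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Decision = { done: boolean; answer?: string; call?: ToolCall };

const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  webSearch: async (args) => `results for ${String(args.query)}`, // stub tool
  readFile: async (args) => `contents of ${String(args.path)}`,   // stub tool
};

// Stub standing in for a real LLM call: asks for one search, then finishes.
async function callLLM(history: string[]): Promise<Decision> {
  const hasObservation = history.some((h) => h.startsWith("observation"));
  if (!hasObservation) {
    return { done: false, call: { tool: "webSearch", args: { query: history[0] } } };
  }
  return { done: true, answer: "summary based on observations" };
}

async function runAgent(goal: string): Promise<string> {
  const history: string[] = [`goal: ${goal}`];          // perception of the environment
  for (let step = 0; step < 10; step++) {               // bounded feedback loop
    const decision = await callLLM(history);            // "reasoning" / lightweight planning
    if (decision.done) return decision.answer ?? "";
    if (decision.call) {
      const run = tools[decision.call.tool];
      const result = await run(decision.call.args);     // act with a tool
      history.push(`observation: ${result}`);           // feed the outcome back into the loop
    }
  }
  return "gave up after too many steps";                // a human in the loop could take over here
}

runAgent("find the latest release notes").then(console.log);
```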

Agents might have a human in the loop or might be 100% autonomous. Do not confuse Agents with Agentic: agentic is an adjective describing tools that behave in agent-like ways but are not real agents.

Agents Need a Medium

Agents require a medium; they are not deployment units per se. You do not deploy an agent to the cloud on its own; it does not work like that.

Your users will interact with mobile applications or websites/SPAs. However, your agent will run on the backend, for instance in a Lambda function (serverless). Your agent could also be a service, like we always did with SOA. Your agent could even be a library, which is what we see all the time with Claude Code: people install MCPs (libraries) via npm and just run console applications on their machines.
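
As a sketch of the "medium" idea, assuming the serverless route: the agent is plain backend code, and an AWS Lambda-style handler (Node.js API Gateway proxy shape) is the medium that receives the request and invokes it. runAgent here is a hypothetical placeholder for your agent.

```typescript
// Hypothetical agent entry point; in a real project this is your agent code.
async function runAgent(goal: string): Promise<string> {
  return `result for: ${goal}`;
}

// The Lambda is the medium: it receives the HTTP event and invokes the agent.
export const handler = async (event: { body?: string }) => {
  const { goal } = JSON.parse(event.body ?? "{}") as { goal?: string };
  if (!goal) {
    return { statusCode: 400, body: JSON.stringify({ error: "goal is required" }) };
  }
  const answer = await runAgent(goal);
  return { statusCode: 200, body: JSON.stringify({ answer }) };
};
```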

HTTP vs Standard In/Out

We see communication with agents happening in two ways. One is via standard in/out, where the agent is a console application that is called with parameters and outputs results that are sent back to the LLM. The other is HTTP, which we see a lot with remote MCP.
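
Here is a rough sketch of the standard in/out style, assuming a simplified one-JSON-object-per-line message shape (this is not the real MCP wire format, which is JSON-RPC based):

```typescript
import * as readline from "node:readline";

// Console-application style: read one JSON request per line from stdin and
// write the result to stdout. The message shape is a simplified assumption.
const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line) => {
  const request = JSON.parse(line) as { tool: string; args: Record<string, unknown> };
  const result = `ran ${request.tool} with ${JSON.stringify(request.args)}`; // stub work
  process.stdout.write(JSON.stringify({ result }) + "\n");
});
```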


Standard In/Out is not a good communication mechanism. It works, but it's not secure. HTTP is much better, because it can be made safe (HTTPS) and we know how to handle security properly; that's how we have been building services for decades. Avoid local MCP and standard In/Out; favor remote MCP, which does not need to live outside your company. Please add a REST interface with a proper contract in front of your capabilities.
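
As a minimal sketch of that recommendation, assuming Express and an illustrative /v1/capabilities/search route: one capability, exposed over HTTP behind an explicit contract, with room for real authentication.

```typescript
import express from "express";

// One capability exposed over HTTP with an explicit contract, instead of stdio.
// The route, payload shape, and auth check are illustrative assumptions.
// Contract: POST /v1/capabilities/search { "query": string } -> { "results": string[] }
const app = express();
app.use(express.json());

app.post("/v1/capabilities/search", (req, res) => {
  const token = req.header("authorization");      // real auth (OAuth, JWT, mTLS) goes here
  if (!token) return res.status(401).json({ error: "unauthorized" });

  const { query } = req.body as { query?: string };
  if (!query) return res.status(400).json({ error: "query is required" });

  res.json({ results: [`stubbed result for ${query}`] }); // stub capability
});

// Terminate TLS at a load balancer or serve HTTPS directly in front of this.
app.listen(8080);
```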

Agents and Internal Shared Libraries

Agents can use libraries, since they are software and therefore have code. Naturally, people will be tempted to create internal shared libraries to reuse code across agents.

This is a bad idea for many reasons. First, shared libraries are the first way you will create a distributed monolith. Yes, AI can migrate code much faster than humans, but you still do not want to build another distributed monolith. Second, all your agents are now coupled to an agent-commons-lib, and if some heavy framework was used there, you will have even more problems. Because agents are software, they will have vulnerabilities and need to be kept up to date and migrated from time to time.

Avoid creating internal shared libraries for agents. Avoid binary coupling between agents.

Agents and Distributed Monoliths

MCP is a funny thing. There are MCPs for all databases out there. You can get access to your data via MCP. I think that is a huge mistake.  

Now you will have several agents accessing multiple data sources directly, such as Postgres, Redis, Aurora MySQL, and all your databases. This is a horrible idea. This is how you will make another massive distributed monolith. 

You can use MCP, but avoid accessing databases directly. 
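
A sketch of what that looks like, using a generic tool shape (not the actual MCP SDK types) and a hypothetical internal orders service: the tool the agent sees calls the API behind a contract, never the database.

```typescript
// Generic tool shape for illustration; not the actual MCP SDK types.
type Tool = {
  name: string;
  description: string;
  run: (args: Record<string, string>) => Promise<string>;
};

// The tool implementation calls the orders service API; there is no SQL,
// no connection string, and no direct database access in the agent.
const getOrderStatusTool: Tool = {
  name: "get_order_status",
  description: "Look up the status of an order by id via the orders API",
  run: async (args) => {
    // Hypothetical internal service URL and response shape.
    const response = await fetch(`https://orders.internal.example.com/v1/orders/${args.id}`);
    if (!response.ok) throw new Error(`orders API returned ${response.status}`);
    const order = (await response.json()) as { id: string; status: string };
    return `order ${order.id} is ${order.status}`;
  },
};
```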

Agents and SOA

Always have contracts. Always have APIs in front of your databases. That's the SOA approach.

Having proper APIs and contracts allows us to avoid distributed monoliths and binary coupling. LLMs can still access data, but only via APIs. If you have your contract in wiki documentation or in a Swagger/OpenAPI document, you can easily create a Claude skill or a simple driver that can access your API.
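
For example, here is a hand-written driver whose types mirror a hypothetical OpenAPI contract for a customers API; in practice, you could also generate such a client from the OpenAPI document itself.

```typescript
// Types mirroring a hypothetical OpenAPI contract for a customers API.
interface Customer {
  id: string;
  name: string;
  tier: "free" | "pro" | "enterprise";
}

// Simple driver an agent (or a Claude skill) can call; the base URL and the
// /v1/customers/{id} path come from the assumed contract.
async function getCustomer(baseUrl: string, id: string): Promise<Customer> {
  const response = await fetch(`${baseUrl}/v1/customers/${encodeURIComponent(id)}`, {
    headers: { accept: "application/json" },
  });
  if (!response.ok) throw new Error(`customers API returned ${response.status}`);
  return (await response.json()) as Customer;
}
```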

LLMs are orchestrators by nature. In the past, we had the ESB, which was an awful thing. However, today you can have an LLM as an orchestrator making several API calls to achieve the goals and tasks that need to be done. We can build agents and have a great solution using AI. We can also use SOA and contract-first design, and ensure proper isolation to avoid creating another distributed monolith with generative AI and Agents.
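
A rough sketch of the orchestration idea, with a stub planner standing in for the LLM and hypothetical internal service URLs: the orchestrator chains API calls, guided by contracts, until the goal is met.

```typescript
// Either the planner asks for another API call, or it declares the goal done.
type Step = { service: string; path: string } | { done: true; summary: string };

// Stub planner standing in for a real LLM: it decides which API to call next
// based on what has been observed so far. URLs and paths are assumptions.
function planNextStep(observations: string[]): Step {
  if (observations.length === 0) {
    return { service: "https://inventory.internal.example.com", path: "/v1/stock/sku-42" };
  }
  if (observations.length === 1) {
    return { service: "https://pricing.internal.example.com", path: "/v1/price/sku-42" };
  }
  return { done: true, summary: observations.join("; ") };
}

async function orchestrate(): Promise<string> {
  const observations: string[] = [];
  while (true) {
    const step = planNextStep(observations);
    if ("done" in step) return step.summary;                // goal reached
    const response = await fetch(step.service + step.path); // call the API behind its contract
    observations.push(await response.text());               // feed the result back to the planner
  }
}

orchestrate().then(console.log);
```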

Cheers,

Diego Pacheco
