DevOps Monolith

The term "monolith" is often associated with legacy systems and with systems that are poorly designed and poorly architected. Microservices are the de facto standard architecture when we talk about software today. Lots of companies are also doing DevOps Engineering, and DevOps per se is closely related to microservices. Microservices require some infrastructure work, for instance:
  • Provisioning: Installing the OS, system packages, files, and scripts.
  • Telemetry: Dashboards and alerts for your microservice ecosystem.
  • Testing: Stress testing, chaos testing, load testing, etc.
  • Canary: Automated canary analysis: deploy, score, and rollback.


How much infrastructure work there is depends on your level of abstraction. If you are working with bare metal or IaaS, you will definitely have more to get done. However, if you are doing cloud-native microservices using Kubernetes or a FaaS stack, you might have less work, but there will always be some infrastructure work for someone to do.




DevOps is a really interesting movement. There are lots of folks talking about cultural aspects like blameless incident reviews, CAMS, and Lean. These are important aspects, don't get me wrong. However, there are engineering aspects of DevOps Engineering that need more care, like avoiding coupling, for instance.


Infrastructure is Software


Lots of infrastructure work is basically API calls. Creating a machine is an API call. Defining a network (SDN) is a set of API calls. Creating a load balancer is an API call. Much of today's infrastructure work boils down to API calls, in other words, software. And in order to have software we need code. That's the easy part. However, engineering is more than just code; it is also about testing and, equally or more important, about architecture.
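As a sketch of this idea, "create a machine" really is just a request against a provider's API. The endpoint, field names, and function below are illustrative assumptions, not any real cloud provider's schema:

```python
import json


def build_create_machine_request(name, cpu, memory_gb):
    """Build a hypothetical 'create machine' API request.

    Field names are illustrative; a real IaaS provider (EC2, GCE, etc.)
    has its own schema, but the shape of the work is the same:
    infrastructure as data sent over an API.
    """
    return {
        "method": "POST",
        "path": "/v1/machines",
        "body": json.dumps({"name": name, "cpu": cpu, "memory_gb": memory_gb}),
    }


req = build_create_machine_request("payments-svc-01", cpu=4, memory_gb=16)
print(req["method"], req["path"])  # POST /v1/machines
```

Once infrastructure is expressed this way, it deserves the same engineering care as any other code: tests, reviews, and architecture.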


There are great tools on the market today for provisioning, like Ansible and Terraform. Both solutions work in a declarative fashion, where you describe the state you want a machine to be in. Both solutions have ways to share common infrastructure code.
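For instance, in Terraform a machine is declared as a resource and the tool converges reality toward that description. This is a minimal sketch; the AMI id and names are placeholders, not real values:

```hcl
# Declarative provisioning sketch: describe the desired state,
# let the tool make the API calls. Values are illustrative.
resource "aws_instance" "microservice" {
  ami           = "ami-123456"   # placeholder AMI id
  instance_type = "t3.medium"

  tags = {
    Name = "payments-service"
  }
}
```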


Every single engineer wants to avoid duplication, because duplication often means more bugs and less productivity. So these tools have ways for you to share code across multiple machines. Let's say you are building microservices with Netty. You will have to do some OS tuning around TCP settings and open file limits, and you will want to share that work. Ansible allows you to create a role in order to share it with several other infrastructure components. The sharing is not only about configs but about installations too. Let's say pretty much all machines need the same telemetry configuration, or share the same Tomcat app server. Why provision the same thing every time? Let's just share it, right?
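A shared Ansible role for that kind of OS tuning might contain tasks like these. This is a hedged sketch, the role name and values are assumptions you would tune for your own workload:

```yaml
# Hypothetical tasks/main.yml for a shared "netty-tuning" role.
# Values are illustrative, not recommendations.
- name: Raise the open files limit for all users
  pam_limits:
    domain: "*"
    limit_type: "-"
    limit_item: nofile
    value: "65536"

- name: Widen the local TCP port range for outgoing connections
  sysctl:
    name: net.ipv4.ip_local_port_range
    value: "1024 65000"
    state: present
```

Reusing a role like this across every microservice is exactly the kind of sharing that looks like a pure win, which is where the trouble begins.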


The DevOps Monolith


Enter the DevOps monolith. It all started as a good idea and then turned engineering life into a nightmare. I have seen this happen a couple of times already. So what problems are created when we treat reuse as a blind goal?


Yes, the monolith is not only about applications; today there are infrastructure monoliths, or DevOps monoliths. As infrastructure gained the great things about code, it also gained some of code's most painful issues.


Before talking about the not-so-nice things, let's talk about the good things. What wins do you get when you share infrastructure code?
  • Standardization: You know what directories to look in, where the scripts are, and how things work. This is definitely a big win and reduces engineering cognitive load.
  • Productivity: Less work to do at the end of the day. It's faster to investigate things, and less documentation is required since things are in a standard place.


The consequences of the DevOps monolith:


  • Build time: I have seen scenarios where provisioning could take up to 40 minutes. Yes, 40 minutes to build an AMI. This kills productivity.
  • Coupling: It is hard to change something just for your microservice without affecting others. Maybe you couple to some OS dependency, and it becomes hard for everyone else to update.
  • Side effects: Another experience I had was using the same roles written for microservices on things that are not microservices, like NoSQL databases. They need different configurations and tuning, and not necessarily the same libs. So it's hard to predict what's happening, and that is also a source of bugs.


Back to Design and Trade-offs


There is no easy solution for this. My experience has shown me that sharing software blindly is a bad idea, but rewriting everything does not work either. The DevOps monolith is dangerous because it can kill the main microservices benefits, like speed of delivering software and the ability to work on different things independently (different times to update libs and dependencies).


Sharing nothing looks like overkill to me for a small business; however, sharing blindly in a medium or big company is also a source of lots of painful issues and of many, many technical debts.


There are trade-offs that need to be taken into account. This is no different from the issues we have with software. As we learned that software architecture is best with isolation, where a service does not share OS, database, or code, we might need to start introducing these isolation ideas into DevOps Engineering as well.
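Applied to provisioning code, that isolation could mean each service owning its own roles, with only a thin, truly common base shared. A hypothetical layout, just to make the idea concrete:

```
ansible/
├── roles/
│   ├── base-os/            # thin, genuinely common layer only
│   ├── payments-service/   # owns its own tuning, libs, telemetry config
│   └── orders-service/     # free to change without touching payments
```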


Cheers,

Diego Pacheco
