Sidecar Pattern

There are several architectural patterns we can follow when we need to distribute a shared piece of software. My favorite and default pattern is the Service Pattern. However, there are other patterns for software distribution. If we think about startups, for instance, there is a popular tool called the Lean Canvas, where you model the business model of your startup. It has many dimensions, but one in particular, "Distribution Channels", is highly related to the subject of this blog post. We, as engineers, often think about only one dimension, which is either a Service or a shared binary library. However, there are other dimensions we could explore. The other options are Tooling, Internal Services / Self-Service Platforms, Runtime Platforms, and Service Mesh. So for this blog post, I want to cover the Sidecar Pattern in a bit more depth. 

Old Idea in a New Reality

In Linux, we have always had very small, dedicated programs that run in the background waiting to react to specific events, often called "daemons". For me, daemons might be the very first sidecars. So you might be wondering: why don't we all just write daemons and be happy? Well, writing code in C and following the classic Linux patterns won't get you far nowadays because the reality is different. The reality is that we are running software in specialized spaces, sometimes with specialized hardware. Most people who are on the cloud (roughly 30% of the market) run their workloads on a compute solution, often virtualized (EC2 or Google Cloud Compute Engine). However, the trend is to run workloads in containers (Docker, Kubernetes, rkt, OCI runtimes, etc.) or even MicroVMs / Serverless. So you have a specialized kernel/OS for your software, or even specialized hardware. A sidecar means running a co-process alongside your application, so you end up with 2 processes: your app and the sidecar. 

Sidecars are not the answer to all kinds of problems. Also, they are not tied to Kubernetes only. Of course, Kubernetes makes it much easier to use sidecars, since in a POD a sidecar is just another Docker container. However, you can run sidecars in VMs, like Netflix has been doing for many years with Priam, Dynomite Manager, and Raigad, and like Lyft is doing now with Envoy.

Sidecar Architecture

You can have any architecture "inside" your sidecar: RPC, event-driven, IPC, or whatever you want, as long as you are DECOUPLED from the application runtime path. Sidecars can be used with the Thin Client pattern as well, and that's fine, but keep in mind that what defines a sidecar is that it runs in another process and therefore is not embedded in your application. 

Sidecars are not embedded, but they are also not remote. Sidecars are not microservices, not Services at all. However, sidecars can have HTTP / REST interfaces and can be called by the application via "pseudo remote calls". I say pseudo because, to the application, it looks like a remote call, BUT because the sidecar is running on the same machine as the application, the latency is much smaller. 
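As a minimal sketch of such a "pseudo remote call", assuming a hypothetical sidecar that exposes a REST endpoint on localhost port 9090 at /metadata (both made up for illustration), the application-side code is just plain HTTP, but the call never leaves the machine:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SidecarClient {
    // Hypothetical endpoint: the sidecar listens on localhost, so this "remote"
    // call never leaves the machine and latency stays very low.
    private static final URI SIDECAR_URI = URI.create("http://localhost:9090/metadata");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(SIDECAR_URI).GET().build();
        // To the application this looks like any other HTTP call,
        // but it is only a local hop to the co-process.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

The nice part of this style is that the application only depends on a local URL, not on any sidecar library, so there is zero binary coupling between the two processes.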

Sidecar Benefits 

There are several benefits of the Sidecar Architectural Pattern like:
* Isolation: You are not in the same "runtime process space" as your application.
* Decoupling: You are not coupled with application choices and vice-versa. 
* Safe Reuse: Meaning no binary coupling (an issue with shared jars and frameworks).
* Encapsulation: We can have encapsulated generic code that can be shipped to all services/apps. 
* Avoiding Binary Coupling: For me, this is a huge win because at scale binary coupling is painful. 
* Technology Diversity: Freedom to choose any language, stack, lib, or framework.
* Updates / Deploys independence: They can happen apart from the application's CI/CD lifecycle. 
* NO SPOF: Since you would have 1 sidecar per service/microservice, we don't have a SPOF. 
* Avoid Massive Migrations: Since the application code is not coupled with the sidecar, there are no massive code migrations. 
 
Finally, I just want to point out that sidecars work perfectly with containers/Kubernetes because in k8s, for instance, the deployment unit is a POD. PODs are just groups of containers that share the same resources, such as the network interface and disk, which makes creating sidecars so easy, since a sidecar is just another Docker container. 

However, like I said before, you can have sidecars without containers. It will require a bit more deployment effort from you, but it is not impossible or hard at all.  

Sidecar Drawbacks

Like everything in software architecture, there are tradeoffs, so sidecars have drawbacks for sure. Let's take a look at some of them: 
 * If you are not on K8s, more deployment complexity (e.g., plain EC2).
 * Great observability is required, or sidecars become black boxes.
 * Harder to debug from the application side: let's say the sidecar is written in Rust and the app is in Java or Go.  
 * Reliability Path: Sidecars now become part of the reliability path, meaning downtime could be a big issue.

Sidecars have a narrow set of use cases; you won't use them to replace the Service in SOA for your business applications, for instance. However, there is a set of problems that sidecars are great for, like proxying, routing, observability, auditing, security, etc. On the other hand, if you can build a proper shared library, keeping it lean and minding its dependencies, it could be a much simpler and more effective solution. 

IMHO, having high technology diversity, like Go, Rust, Java, Python, and C++, definitely justifies using sidecars. But if you are massively invested in one language, let's say Java for instance, you need to make sure it is the right use case. Different languages and different libs have different strengths, so analyze the tradeoffs carefully, because moving away from the default language could be an issue. 

The default language (let's say Java, for instance) often means you have people trained to deploy, monitor, tune, and deal with complex workloads at scale or in the cloud, so introducing another language might mean you would be alone or would need to build that capability in-house, which goes beyond a pure technology decision IMHO; there is more to it. 

When to Use Sidecars

A good indicator is having a cross-cutting concern that is not your business logic, like observability, security, performance, reliability, auditing, and infrastructure/operations. Another point of consideration is that you might want to avoid binary coupling with the applications. Also, maybe you would really gain performance, or you have a tool in another language that is much better suited for the job, much faster, or simpler. 

If your use case FITS, in the sense of requirements and architectural goals, the things I described above, then I think you should really consider a sidecar as your solution. 

When to Avoid Sidecars

If you need extreme performance and super-low latency, a lib would definitely be faster than a sidecar because it runs embedded in the application. You need to analyze these tradeoffs carefully, since there are other things you could do to make your sidecar efficient, like using IPC, reactive programming, or performant languages like Rust (see the sketch below). Another criterion to avoid sidecars could be that the final solution is not that complicated, or that the infrastructure price of sidecars doesn't pay off (assuming you're not running on K8s but EC2).  
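As a minimal sketch of the IPC idea, assuming a hypothetical sidecar that listens on a Unix domain socket at /tmp/sidecar.sock and speaks a made-up ping/reply protocol (both are assumptions for illustration), the application could skip TCP entirely and talk to the co-process over the filesystem socket. This requires Java 16+ for Unix domain socket channels:

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;

public class SidecarIpcClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical socket path where the sidecar listens; adjust to your setup.
        UnixDomainSocketAddress address =
                UnixDomainSocketAddress.of(Path.of("/tmp/sidecar.sock"));
        try (SocketChannel channel = SocketChannel.open(StandardProtocolFamily.UNIX)) {
            channel.connect(address);
            // Send a tiny request and read back the sidecar's reply,
            // all without touching the TCP/IP stack.
            channel.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
            ByteBuffer reply = ByteBuffer.allocate(256);
            channel.read(reply);
            reply.flip();
            System.out.println(StandardCharsets.UTF_8.decode(reply));
        }
    }
}
```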

If for some reason you need to scale the sidecar independently of the application, using a sidecar might not be a great idea. However, you can always use other external components for scalability, like databases and caches, but you still need to think about this, especially if the workload happens mainly on the sidecar. 

The Sidecar is a very interesting architecture pattern we can use in cloud-native applications. However, we need to resist the urge to "sidecar all the things", otherwise we won't get the benefits. Remember, there is no one-size-fits-all. 

Cheers,
Diego Pacheco


