Netflix Dynomite/Dyno: The Cluster for Redis

Dynomite is brilliant. Kudos to the NetflixOSS team, because it kicks ass. First of all, they mixed several interesting, battle-tested and sexy architectural ideas and delivered them as a single solution. So what is Dynomite? You can think of it as a kick-ass cluster for Memcached and Redis. But it's way more than that. Dynomite is integrated with the Netflix stack, so you can use it with Eureka and the rest of the stack. You don't need to use Redis or Memcached if you don't want to, because Dynomite is modular: you can swap out the NoSQL store behind it.

Dynomite is based on the Amazon Dynamo paper, so it implements a consistent hashing ring with quorum-like mechanisms. That means you can have strong consistency and not lose data (similar to Cassandra and Riak), while still getting low latency and high throughput from the Redis or Memcached running behind it. Dynomite is written in C and works as a proxy; it uses Twitter's twemproxy as its base. Replication is asymmetric. Dynomite has a Java client called Dyno, which does token-aware load balancing. On the consistency side you can choose: DC_ONE (sync within the same AZ, async to other regions) or DC_QUORUM (sync to the number of nodes in the quorum).
Performance is amazing; check out the benchmarks by the Netflix folks. They used r3.xlarge instances with a replication factor of 3 across 3 Amazon availability zones, put Dynomite in front of Redis, and ran sets of GET and SET operations. The ratio between reads and writes was 80% reads to 20% writes (pretty much the Netflix scenario).
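To make the Dynamo-style routing concrete, here is a minimal sketch of a token ring in Java. The node names, the CRC32 hash, and the single-token-per-node layout are simplifications for illustration, not Dynomite's actual token scheme:

```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

// Minimal consistent hashing ring sketch: each node owns a token, and a key
// is routed to the first node whose token is >= hash(key), wrapping around.
public class TokenRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public void addNode(String node, long token) {
        ring.put(token, node);
    }

    // Token-aware routing: find the owning node for a key without any
    // intermediate hops, the same idea Dyno uses on the client side.
    public String ownerOf(String key) {
        long h = hash(key);
        SortedMap<Long, String> tail = ring.tailMap(h);
        Long token = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(token);
    }

    private static long hash(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return crc.getValue();
    }

    public static void main(String[] args) {
        TokenRing ring = new TokenRing();
        ring.addNode("node-a", 0L);
        ring.addNode("node-b", 1_431_655_765L);  // ~1/3 of the 32-bit hash space
        ring.addNode("node-c", 2_863_311_530L);  // ~2/3 of the 32-bit hash space
        System.out.println(ring.ownerOf("user:42"));
    }
}
```

The same key always hashes to the same owner, which is what lets a token-aware client talk straight to the right node instead of bouncing through a coordinator.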

Dyno Client Features

One of the great things about the client (Dyno) is that you can choose the underlying driver you want to use: for Redis you can use Jedis or Redisson, for instance, and since Dyno is modular you can write an integration for another client if you like it more. Some key features in Dyno are:
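The modularity boils down to coding against an interface rather than a concrete driver. Here is a hypothetical sketch of that design; the `RedisCommands` interface and `InMemoryRedis` class are illustrative only, not part of Dyno's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative interface: the application depends only on this contract,
// so the backing driver (Jedis, Redisson, ...) can be swapped freely.
interface RedisCommands {
    String set(String key, String value);
    String get(String key);
}

// Trivial in-memory stand-in; a real adapter would delegate to Jedis or Redisson.
class InMemoryRedis implements RedisCommands {
    private final Map<String, String> data = new HashMap<>();
    public String set(String key, String value) { data.put(key, value); return "OK"; }
    public String get(String key) { return data.get(key); }
}

public class PluggableClientDemo {
    public static void main(String[] args) {
        RedisCommands client = new InMemoryRedis(); // swap the implementation here
        client.set("greeting", "hello");
        System.out.println(client.get("greeting"));
    }
}
```

Swapping drivers then means changing one constructor call, not hunting driver-specific calls through the codebase.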

    * Connection pooling of persistent connections - this helps reduce connection churn on the Dynomite server with client connection reuse.
    * Topology aware load balancing (Token Aware) for avoiding any intermediate hops to a Dynomite coordinator node that is not the owner of the specified data.
    * Application specific local rack affinity based request routing to Dynomite nodes.
    * Application resilience by intelligently failing over to remote racks when local Dynomite rack nodes fail.
    * Application resilience against network glitches by constantly monitoring connection health and recycling unhealthy connections.
    * Capability of surgically routing traffic away from any nodes that need to be taken offline for maintenance.
    * Flexible retry policies, such as exponential backoff.
    * Insight into connection pool metrics
    * Highly configurable and pluggable connection pool components for implementing your advanced features.
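To show what a retry policy like the one in the list above looks like, here is a hypothetical exponential-backoff sketch; the class and method names are illustrative, not Dyno's actual API:

```java
import java.util.concurrent.Callable;

// Illustrative exponential-backoff retry: delay doubles on each failed
// attempt (base * 2^attempt) until maxAttempts is exhausted.
public class ExponentialBackoffRetry {
    private final int maxAttempts;
    private final long baseDelayMs;

    public ExponentialBackoffRetry(int maxAttempts, long baseDelayMs) {
        this.maxAttempts = maxAttempts;
        this.baseDelayMs = baseDelayMs;
    }

    // Delay before retrying after attempt n (0-based): base * 2^n.
    public long delayForAttempt(int attempt) {
        return baseDelayMs * (1L << attempt);
    }

    public <T> T execute(Callable<T> op) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(delayForAttempt(attempt));
            }
        }
        throw last; // all attempts failed
    }
}
```

The point of backing off exponentially is to give a struggling node breathing room instead of hammering it with immediate retries.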

Installing, Configuring and Running

Diego Pacheco
