Running Istio on AWS with Kops

In previous posts, I showed how to run Istio on Minikube and with Docker Compose/Consul in a local environment; today I will show how to run it on AWS using Kops.

This installation is Linux-based (Ubuntu). I'm running all commands from my local desktop; if you don't use Linux (shame on you), you can create a virtual machine on AWS with Ubuntu and run these commands there, or spin up a Vagrant box with Linux and run them from the box. Istio runs smoothly on AWS with Kops. You don't need much: pretty much 3 machines (1 master node, 2 minions). Keep in mind this is not a production-grade setup; for production you should run at least 3 masters for high availability.

Installing and Running Istio with Kops

Setup Kops

Local Linux

Install AWS cli

pip install awscli --upgrade --user

Install Kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Install Kops

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
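
A quick optional sanity check that both binaries are on the PATH:

kops version
kubectl version --client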

AWS Setup

Setup Permissions

aws configure
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Create Kops user and permissions

aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
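
If you want the rest of the commands to run as the kops user, you can export its keys; this is just a sketch, and the placeholder values come from the create-access-key output above:

export AWS_ACCESS_KEY_ID=<AccessKeyId from the output above>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the output above>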

Create a Bucket (SAME name as the cluster)

aws s3api create-bucket --bucket devpoc.k8s.local --create-bucket-configuration LocationConstraint=us-west-2
export KOPS_STATE_STORE=s3://devpoc.k8s.local
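
Optionally, enabling versioning on the state-store bucket lets you roll back to previous cluster configurations (not required for this walkthrough):

aws s3api put-bucket-versioning --bucket devpoc.k8s.local --versioning-configuration Status=Enabled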

DNS Configuration

As long as the cluster name ends with .k8s.local (e.g. devpoc.k8s.local), Kops will use gossip-based DNS and will not need public DNS.
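
For illustration (the second name is hypothetical):

# devpoc.k8s.local    -> gossip-based DNS, no Route 53 hosted zone needed
# devpoc.example.com  -> would require a public Route 53 hosted zone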

Setup the KEYS

aws ec2 create-key-pair --key-name kp_devpoc_k8s | jq -r '.KeyMaterial' > kp_devpoc_k8s.pem
mv kp_devpoc_k8s.pem ~/.ssh/ 
chmod 400 ~/.ssh/kp_devpoc_k8s.pem
ssh-keygen -y -f ~/.ssh/kp_devpoc_k8s.pem > ~/.ssh/kp_devpoc_k8s.pub

Create a Cluster with Kops

export AWS_REGION=us-west-2
export NAME=devpoc.k8s.local
export KOPS_STATE_STORE=s3://$NAME

kops create cluster \
--cloud aws \
--networking kubenet \
--name $NAME \
--master-size t2.medium \
--node-size t2.medium \
--zones us-west-2a \
--ssh-public-key ~/.ssh/kp_devpoc_k8s.pub \
--yes
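
For reference only, a rough sketch of what a more HA-friendly variant could look like, assuming three availability zones in us-west-2 (not used in this post):

kops create cluster \
--cloud aws \
--networking kubenet \
--name $NAME \
--master-zones us-west-2a,us-west-2b,us-west-2c \
--zones us-west-2a,us-west-2b,us-west-2c \
--master-size t2.medium \
--node-size t2.medium \
--node-count 2 \
--ssh-public-key ~/.ssh/kp_devpoc_k8s.pub \
--yes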

Deploy Validation

First of all, WAIT (sometimes ~5 min)... after AWS creates all the boxes:

kops validate cluster
kubectl get nodes

Prepare cluster for Istio

kops edit cluster $NAME

Add the following under the spec section of the cluster manifest:

kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - PersistentVolumeLabel
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - MutatingAdmissionWebhook
    - ValidatingAdmissionWebhook
    - ResourceQuota
    - NodeRestriction
    - Priority

kops update cluster --yes
kops rolling-update cluster --yes

# Check that the kube-apiserver pods picked up the new admission controllers
for i in `kubectl get pods -n kube-system | grep api | awk '{print $1}'` ; do
  kubectl describe pods -n kube-system $i | grep "/usr/local/bin/kube-apiserver"
done

Install Istio

git clone https://github.com/istio/istio.git
cd istio/
git checkout tags/1.0.5
sudo cp bin/istioctl /usr/local/bin
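
A quick optional check that the client binary is in place:

istioctl version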

Installing with default mutual TLS auth

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

Checking the installation

kubectl get svc -n istio-system
kubectl get pods -n istio-system
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
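
Optionally, you can confirm the Gateway and VirtualService created by the file above:

kubectl get gateway,virtualservice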

Deploying and Testing the Bookinfo App

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
kubectl get services,po
URL="http://$(aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[1].DNSName'):80/productpage"
curl -o /dev/null -s -w "%{http_code}\n" $URL
xdg-open $URL
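
The jq index above depends on the order AWS returns the load balancers in; a sketch of an alternative that reads the ingress hostname straight from the istio-ingressgateway Service (assuming it got an ELB hostname):

INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
URL="http://$INGRESS_HOST/productpage"
curl -o /dev/null -s -w "%{http_code}\n" $URL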

Prometheus Access

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090
xdg-open "http://localhost:9090/graph?g0.range_input=1h&g0.expr=istio_request_bytes_count&g0.tab=0"
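
The dashboards only show data once there is traffic; in another terminal you can generate a few requests reusing the $URL set earlier (a simple sketch):

for i in $(seq 1 100); do curl -s -o /dev/null $URL; done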

Generating Service Graph

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088
xdg-open http://localhost:8088/force/forcegraph.html

Metrics Dashboard with Grafana

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
xdg-open http://localhost:3000/d/1/istio-mesh-dashboard

Distributed Tracing with Jaeger

kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686
xdg-open http://localhost:16686/search

Install Kiali

bash <(curl -L http://git.io/getLatestKialiKubernetes)
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001
xdg-open http://localhost:20001/

Destroy the cluster (kops)

kops delete cluster devpoc.k8s.local --yes
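
Kops removes the AWS resources it created, but the S3 state bucket stays behind; if you no longer need the state, you can remove it as well (destructive):

aws s3 rb s3://devpoc.k8s.local --force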


Master and Worker nodes on AWS EC2 Console

Istio Metrics in Grafana

Jaeger - Distributed Tracing

Kiali - Observability

BookInfo ServiceMesh (4 microservices) running on Istio / Kubernetes in AWS

Prometheus (Cloud-Native Observability) - Metrics, Dashboards, and Alerts

ServiceGraph

That's it - I hope you enjoyed.

Cheers,
Diego Pacheco
