Running Kubernetes on AWS with KOPS
Kops is the best way to get Kubernetes running on AWS. Kops allows us to install Kubernetes on EC2. It is written in Go and helps us create, update, maintain, and destroy Kubernetes clusters on AWS. Kops also supports GCP (Google Cloud Platform) and has an interesting ability to generate Terraform files, if that's your thing :-). For this blog post, we will use the AWS ELB DNS name directly, so we won't need public DNS records; Kops skips public DNS when you carefully name the cluster so that it ends with .k8s.local. Right now it is way faster to spin up a Kubernetes cluster with Kops than with EKS.
Installing and running Kubernetes in AWS with KOPS
pip install awscli --upgrade --user
# Linux binary, matching the kops-linux-amd64 download below; on macOS use .../darwin/amd64/kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
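To sanity-check the kubectl binary before we even have a cluster, a client-only version check works:
kubectl version --client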
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
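Same idea for kops; this prints whichever release the redirect above resolved to:
kops version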
aws configure
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
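Before touching IAM, it's worth confirming which identity the CLI is actually using:
aws sts get-caller-identity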
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
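The create-access-key call above prints the new credentials only once. If you want the rest of the session to run as the kops user, one sketch is to capture them with jq at creation time instead of copying them by hand (use this in place of the previous command, not after it, or you will end up with two key pairs):
KOPS_CREDS=$(aws iam create-access-key --user-name kops)
export AWS_ACCESS_KEY_ID=$(echo "$KOPS_CREDS" | jq -r '.AccessKey.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$KOPS_CREDS" | jq -r '.AccessKey.SecretAccessKey')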
aws s3api create-bucket --bucket devpoc.k8s.local --create-bucket-configuration LocationConstraint=us-west-2
export KOPS_STATE_STORE=s3://devpoc.k8s.local
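Kops keeps the entire cluster state in this bucket, so enabling S3 versioning is a cheap safety net (optional, but recommended by the kops docs):
aws s3api put-bucket-versioning --bucket devpoc.k8s.local --versioning-configuration Status=Enabled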
As long as the cluster name ends with .k8s.local, Kops will not use public DNS, e.g.: devpoc.k8s.local
aws ec2 create-key-pair --key-name kp_devpoc_k8s | jq -r '.KeyMaterial' > kp_devpoc_k8s.pem
mv kp_devpoc_k8s.pem ~/.ssh/
chmod 400 ~/.ssh/kp_devpoc_k8s.pem
ssh-keygen -y -f ~/.ssh/kp_devpoc_k8s.pem > ~/.ssh/kp_devpoc_k8s.pub
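If you want to double-check the public key we just derived from the .pem, ssh-keygen can print its fingerprint:
ssh-keygen -lf ~/.ssh/kp_devpoc_k8s.pub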
export AWS_REGION=us-west-2
export NAME=devpoc.k8s.local
export KOPS_STATE_STORE=s3://$NAME
kops create cluster \
--cloud aws \
--networking kubenet \
--name $NAME \
--master-size t2.medium \
--node-size t2.medium \
--zones us-west-2a \
--ssh-public-key ~/.ssh/kp_devpoc_k8s.pub \
--yes
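The cluster is registered in the S3 state store right away, even while EC2 is still provisioning the boxes; you can list it immediately:
kops get cluster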
First of all, WAIT (sometimes around 5 minutes)... After AWS creates all the boxes, validate the cluster:
kops validate cluster
kubectl get nodes
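You can also confirm which API endpoint kubectl is talking to (it should point at the ELB kops created for the masters):
kubectl cluster-info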
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
kubectl -n kube-ingress get all
kubectl create -f https://raw.githubusercontent.com/diegopacheco/k8s-specs/master/aws/go-demo-2.yml
kubectl rollout status deployment go-demo-2-api
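The go-demo-2 spec should also create an Ingress resource, which is what maps the /demo path we curl below; you can list it to confirm:
kubectl get ing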
# pick the ingress ELB here: the ELB kops creates for the masters has *api* in its name; adjust the index (0, 1, 2, 3, ...) depending on how many ELBs you have
CLUSTER_DNS=$(aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[1].DNSName')
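Indexing by position is fragile; a sketch of a name-based alternative that uses jq's select to filter out the kops API ELB:
CLUSTER_DNS=$(aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName | contains("api") | not) | .DNSName')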
curl -i "http://$CLUSTER_DNS/demo/hello"
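The ingress ELB can take a minute or two to pass its health checks, so the first curl may fail; a small retry loop helps (-f makes curl treat HTTP errors as failures):
until curl -sf "http://$CLUSTER_DNS/demo/hello"; do sleep 10; done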
kops delete cluster devpoc.k8s.local --yes
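kops delete cluster removes the AWS resources it created, but not the key pair or the state bucket; if you want a full cleanup (the bucket must be emptied before it can be deleted):
aws ec2 delete-key-pair --key-name kp_devpoc_k8s
aws s3 rm s3://devpoc.k8s.local --recursive
aws s3api delete-bucket --bucket devpoc.k8s.local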
Cheers,
Diego Pacheco