Kubernetes vs AWS ECS: my own experience using them.


At my job we decided to move some of our applications to Docker containers, so we had to pick a technology to run our containers in production. 99% of our stuff runs in AWS, so the team's choice was ECS. Part of it is that ECS is a managed service from AWS, the documentation is great, and it still gave us the freedom to manage the cluster.

Here are the pros and cons of using ECS after a year of running it in production.

Pros
Managed by AWS, though there are some gotchas
Documentation is clear
It is fairly easy to run containers on it.
Streaming logs into CloudWatch is very easy (see the sketch after this list).
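
To give a sense of how little there is to the logging piece, here is a rough sketch of wiring a container to CloudWatch Logs with the awslogs driver; the task family, image, log group and region names are made up for illustration.

cat > taskdef.json <<'EOF'
{
  "family": "my-app",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "memory": 256,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "my-app"
        }
      }
    }
  ]
}
EOF

# Create the log group first, then register the task definition
aws logs create-log-group --log-group-name /ecs/my-app
aws ecs register-task-definition --cli-input-json file://taskdef.json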

Cons
The console is awful
Tooling is a bit of a pain. We managed it by using CloudFormation templates.
The ECS agent needs to be upgraded all the time. The ECS agent is the container that communicates with the AWS ECS control plane and launches or stops containers, but AWS is always pushing updates and you have to work out the details of how to update the agent yourself (see the sketch after this list).
Cron-scheduled tasks are awful to manage, with no visibility, since everything runs via CloudWatch Events.
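
For the agent updates mentioned above, this is roughly what the dance looks like from the CLI; the cluster name is made up, and update-container-agent only works on the ECS-optimized AMI, which is exactly the kind of detail you have to work out.

# List the container instances in the cluster (cluster name is hypothetical)
aws ecs list-container-instances --cluster my-cluster

# Check which agent version one of them is running
aws ecs describe-container-instances --cluster my-cluster \
    --container-instances <container-instance-arn> \
    --query 'containerInstances[].versionInfo'

# Ask ECS to update the agent in place (ECS-optimized AMI only)
aws ecs update-container-agent --cluster my-cluster \
    --container-instance <container-instance-arn>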

You still have to do some support of the instances and provision them, and even though you use CloudFormation templates for that, you still have to manage disk space, sizing and monitoring of the EC2 instances.

That made me look into AWS Fargate, which basically allows us to launch containers in ECS without having to manage the EC2 instances; everything is transparent to us, which is great, because our job becomes only managing container deployments. It also allows containers to connect to services inside our VPC without us having to launch instances inside our VPC. AWS manages all that magic, and the security, internally.
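
For flavor, this is roughly what launching a task on Fargate looks like from the CLI; the cluster, task definition, subnet and security group IDs are all made up.

aws ecs run-task \
    --cluster my-fargate-cluster \
    --launch-type FARGATE \
    --task-definition my-app:1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=DISABLED}'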

But why did I look into Kubernetes?

Kubernetes is everywhere and it is taking over everything that gets in its path. I used to work at a company that had been running Kubernetes on CoreOS, and the whole thing was a mess. We provisioned the instances with Ansible and then deployed Kubernetes on them. I think the problem at the time was the way things got launched; it was hard for engineers to understand how it all worked. It was all in place before I joined the company, and every day we had problems.

A lot of the problems with Kubernetes come from misunderstanding how it works, how to set it up and how to launch applications. I recently built a cluster to test stuff and use as a playground. Configuring the cluster manually instead of using kops made it much easier for me to understand how everything works and how to debug it if it breaks.

Even though I configured everything by hand, it didn't take long. I was eating a bowl of cereal and watching YouTube videos when I started the project, and in about an hour and a half I had Kubernetes deployed along with the Kubernetes UI. I knew how to provision nodes, make deployments, look at logs, describe services and expose services via a load balancer in less than two hours, which is great as an exercise.
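
Those day-to-day operations boil down to a handful of kubectl commands, roughly like the ones below; the deployment name and image are made up.

kubectl get nodes                                                # see the nodes in the cluster
kubectl create deployment hello --image=nginx                    # make a deployment
kubectl logs deployment/hello                                    # look at the logs of one of its pods
kubectl expose deployment hello --port=80 --type=LoadBalancer    # expose it via a load balancer
kubectl describe service hello                                   # describe the resulting service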

Here is a piece of technology that a lot of people are using, and I was able to get up and running in less than two hours. The next day I deployed a guestbook application by applying some deployment and service YAML files and it all worked. Then I built a container of an app I was working on and was able to deploy it to my cluster pretty easily (roughly the flow sketched below). I know this is just a proof of concept, but I feel confident about launching and supporting an environment running Kubernetes in production. Maybe at some point I will pitch this to the team at work.
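
That flow is roughly the usual build, push and apply dance; the registry, image name and manifest file names below are placeholders.

# Build and push an image of the app (registry and tag are made up)
docker build -t myregistry/myapp:0.1 .
docker push myregistry/myapp:0.1

# Apply the deployment and service manifests, then watch it come up
kubectl apply -f deployment.yaml -f service.yaml
kubectl get pods
kubectl get service myapp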

The reason is that the documentation for Kubernetes is very concise and clear. The stuff just works, and once you understand the principles behind how Kubernetes works, you can basically get up and running. Maybe we can start with staging environments and then move into production.

Also, there is a TON more tooling for Kubernetes than there is for ECS.

Amazon will come out with EKS soon; there is no doubt that Kubernetes is the big gorilla in the room. Kubernetes is fun and pretty cool technology.

I am looking forward to my journey into production Kubernetes.

Enable Termination Protection on AWS instances the dirty way


At work we use Chef for provisioning instances (I hate Chef), but it is what it is.
Our instances did not have termination protection enabled, and we have many large nodes running specialized databases; I didn't want anyone to terminate an instance by mistake using the AWS console.

Sometimes people are clicking around and they terminate a node in the console, or even via the API.

In order to change termination protection using the AWS CLI, you need the instance ID of the instance so you can call modify-instance-attribute and enable termination protection. I had already used Ansible to set up host groups for the instances, so I ran this quick one-liner to first connect to each instance, use curl to get the instance ID from the local EC2 metadata, and then enable termination protection on that instance.

for ec2_instance in `ansible -i inventory/$1.yml $2 -m shell -a "curl -s  http://169.254.169.254/latest/meta-data/instance-id" | grep -v rc | grep -v WARN` ; \
do aws ec2 modify-instance-attribute --instance-id $ec2_instance  --attribute disableApiTermination --value true ; done

Here $1 is the inventory, either prod or staging, and $2 is the host group I want to get the instance IDs from.
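
If you save that loop as a script, usage would look roughly like this; the script name, inventory and host group are made up, and the describe call is just a spot check that the attribute really flipped.

# Hypothetical invocation: prod inventory, the db host group
./enable-termination-protection.sh prod db

# Spot-check one of the instances afterwards
aws ec2 describe-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --attribute disableApiTermination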

I know I could do this with Cloud Custodian or other tools, but this was very quick.