
Goodbye K8s! This Is Excellent!

We’ve exited Kubernetes after 20 months. Here’s what the platform gave us and how it shaped our multi-cloud technology stack.

This article is part of the “Goodbye K8s” series, looking at the reasons that eventually drove us away from one of the hottest platforms on the market:

  1. Goodbye K8s! We Had So Many Plans!
  2. Goodbye K8s! This Is Excellent!
  3. Goodbye K8s! This Isn’t Working!
  4. Goodbye K8s! Welcome Back, Old Friend!
  5. Goodbye K8s! It’s Still a Business!
  6. Goodbye K8s! Thanks for the Lessons!
  7. Goodbye K8s! We’ll Meet Again!

Ghost and K8s

When we first went looking for a suitable solution to the original objectives over 20 months ago, we ended up settling on the Ghost platform. It’s simple. It’s quick. It’s got native Markdown support. What could possibly go wrong?!

Best of all, it even has an API that we ended up leveraging quite heavily via Post2Ghost. Unfortunately, the latter also severely hamstrung us, as it tied us to version 2 of Ghost for several reasons. But that’s another (and unrelated) story.

Thankfully, deploying Ghost was — and still is — very easy, in large part due to the numerous readily available deployment scripts for almost every major target platform. From virtual machines to containers and across cloud service providers.

With other projects already leveraging fundamental cloud compute options such as EC2 instances, it was time to set sail and explore the world of containerisation. And gain a better understanding of its powers and shortcomings. Its benefits and drawbacks.

The existence of readily available Docker containers and Helm charts significantly lowered the barrier to entry for deploying a Ghost installation into a K8s cluster.

With lots of excitement and an appetite for experimentation, we set out to shape our technology stack around Ghost and K8s. In Terraform. Because repeatability matters. And everything gets versioned.

A Multi-Cloud Stack

All that remained at this point was to find a K8s cluster to deploy a Ghost installation into and then create some DNS entries pointing to the resulting Ghost website.

Now, we already had other projects running on AWS. However, we also wanted to try out the managed K8s service from the inventors of the platform.

For that reason, we went with Google Cloud Platform’s (GCP for short) Google Kubernetes Engine (GKE for short) service. And multi-cloud for the final solution. Because: Why not?! What could possibly go wrong?!
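
For illustration, the multi-cloud split boils down to two providers living in the same Terraform configuration. A minimal sketch, with the region and project values assumed rather than taken from our actual setup:

    # AWS handles all things DNS: Route 53 and the S3 apex redirect.
    provider "aws" {
      region = "us-east-1" # assumed; Route 53 is global anyway
    }

    # GCP hosts the GKE cluster running Ghost.
    provider "google" {
      project = "ghost-blog-project" # hypothetical project ID
      region  = "us-central1"
    }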

AWS for All Things DNS

An existing AWS account of ours already had a hosted zone for the domain how-hard-can-it.be configured in Route 53. So, we went ahead and re-used it.

Eventually, we ended up using the AWS account for all things DNS, including the apex redirect via an S3 bucket (try http://how-hard-can-it.be to see it in action; try https://how-hard-can-it.be for a gap in the solution) as well as A records for the final Ghost website.
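
A sketch of that DNS setup in Terraform, assuming a 3.x AWS provider (with the inline website block) and a hypothetical www subdomain for the Ghost site itself:

    # Re-use the existing hosted zone.
    data "aws_route53_zone" "main" {
      name = "how-hard-can-it.be."
    }

    # Public IP of the load balancer fronting Ghost on the GKE side.
    variable "ghost_ingress_ip" {
      type = string
    }

    # S3 bucket that does nothing but redirect apex traffic.
    resource "aws_s3_bucket" "apex_redirect" {
      bucket = "how-hard-can-it.be"

      website {
        redirect_all_requests_to = "https://www.how-hard-can-it.be"
      }
    }

    # Alias A record for the apex, pointing at the S3 website endpoint.
    resource "aws_route53_record" "apex" {
      zone_id = data.aws_route53_zone.main.zone_id
      name    = "how-hard-can-it.be"
      type    = "A"

      alias {
        name                   = aws_s3_bucket.apex_redirect.website_domain
        zone_id                = aws_s3_bucket.apex_redirect.hosted_zone_id
        evaluate_target_health = false
      }
    }

    # Plain A record for the Ghost website served out of GKE.
    resource "aws_route53_record" "ghost" {
      zone_id = data.aws_route53_zone.main.zone_id
      name    = "www.how-hard-can-it.be" # hypothetical subdomain
      type    = "A"
      ttl     = 300
      records = [var.ghost_ingress_ip]
    }

The HTTPS gap on the apex is baked into this design: S3 website endpoints only speak plain HTTP, so the redirect works for http:// but not for https://.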

GKE for the Rest

On GCP, we provisioned a K8s cluster in GKE, consisting of two e2-small machines (as the same number of g1-small machines turned out to be too small for the required workload) in us-central1. The Iowa region was chosen on price alone.
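
In Terraform, that original cluster amounted to roughly the following. Names are made up, and the exact zone within us-central1 is an assumption:

    resource "google_container_cluster" "ghost" {
      name     = "ghost-cluster" # hypothetical name
      location = "us-central1-a" # Iowa, chosen on price

      # Manage the node pool separately rather than via the default pool.
      remove_default_node_pool = true
      initial_node_count       = 1
    }

    resource "google_container_node_pool" "ghost_nodes" {
      name       = "ghost-pool" # hypothetical name
      location   = "us-central1-a"
      cluster    = google_container_cluster.ghost.name
      node_count = 2 # two e2-small nodes: too few, as it turned out

      node_config {
        machine_type = "e2-small"
      }
    }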

Ghost was deployed into the provisioned K8s cluster using the Bitnami ghost Helm chart. Additional HTTPS support was achieved through Let’s Encrypt by deploying the jetstack/cert-manager Helm chart into the K8s cluster as well.
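
Deployed through Terraform’s Helm provider, the two charts look roughly like this. Only the host name and CRD installation are set here, with everything else left at chart defaults; installCRDs only exists in newer cert-manager chart versions, and the subdomain is again hypothetical:

    # Bitnami Ghost chart: Ghost itself plus a bundled MariaDB.
    resource "helm_release" "ghost" {
      name       = "ghost"
      repository = "https://charts.bitnami.com/bitnami"
      chart      = "ghost"

      set {
        name  = "ghostHost"
        value = "www.how-hard-can-it.be" # hypothetical subdomain
      }
    }

    # Jetstack cert-manager chart: drives the Let's Encrypt certificates.
    resource "helm_release" "cert_manager" {
      name       = "cert-manager"
      repository = "https://charts.jetstack.io"
      chart      = "cert-manager"
      namespace  = "cert-manager"

      set {
        name  = "installCRDs"
        value = "true"
      }
    }

On top of this, cert-manager still needs an Issuer or ClusterIssuer resource pointing at Let’s Encrypt, plus the matching ingress annotations; we leave those out here.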

In total, we ended up running between four and five containers in the cluster at any point in time. One container was for the main Ghost server, another one for MariaDB (part of the Bitnami Ghost Helm chart), and between two and three for Let’s Encrypt support.

A Bigger GKE Cluster for the Rest

The above setup turned out to be quite minimal indeed. Bravely minimal. Courageously minimal. Too minimal, as we learned the hard way. During an outage. Much later on. Because, upon initial deployment, all containers spun up and all services registered successfully. Happy website slinging from there on.

However, when either the Ghost or the MariaDB container fell over, it was unable to restart due to a lack of cluster resources (we didn’t have cluster auto-scaling enabled at that point in time). The affected container would end up hopelessly sitting in the Unschedulable state. No websites to serve to the world. Just 503s. Not good.

In the end, we had to upgrade to a bigger K8s cluster with at least three machines. Not only did it resolve the Unschedulable problem, it kept the problem away for the rest of the time we ran the setup. Oh, and enabling auto-repair did its part, too.
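
Sketched in Terraform again, the fix boiled down to a larger node pool with auto-repair switched on; the earlier node pool definition changed to something like:

    resource "google_container_node_pool" "ghost_nodes" {
      name       = "ghost-pool"
      location   = "us-central1-a"
      cluster    = google_container_cluster.ghost.name
      node_count = 3 # at least three machines this time

      node_config {
        machine_type = "e2-small"
      }

      # Auto-repair recreates unhealthy nodes instead of leaving
      # pods stuck in the Unschedulable state.
      management {
        auto_repair = true
      }
    }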

K8s Price

The next article in this series looks at the price Kubernetes commanded for the technology stack above, with the platform right at its centre. And at all the reasons that eventually drove us away from it.


Dominic Dumrauf

A Cloud Success Champion by profession, an avid outdoor enthusiast by heart, and a passionate barista by choice. Still hunting that elusive perfect espresso.
