We’ve exited Kubernetes after 20 months. Here’s why a realistic assessment drove us back to an old friend. And what we’re getting now.
This article is part of the “Goodbye K8s” series, looking at the reasons that eventually drove us away from one of the hottest platforms on the market:
Time for a Realistic Assessment
Despite the financial and management setbacks we had experienced around K8s, we kept on using the platform. Steadily improving our stack. Automating away what we could. In the end, we managed to keep the blog limping along on K8s for 20 months.
But after churning through seven Gmail accounts (each one just to unlock the USD 300 of free GCP credits granted on sign-up) and countless weekends spent trying to keep our technology stack afloat, it was time for a realistic assessment of the entire situation.
In reality, we had managed to put ourselves in an unsustainable situation.
Not only were we paying a fortune in virtual free credits (which could dry up at any second) for something fairly simple, but we were also chasing a moving target that added multiple layers of complexity, all while trying to keep up with an ever-changing ecosystem.
All this took away valuable time we couldn’t spend on the things that actually mattered. Like writing articles. Not to mention our families. We were drowning in the management of the platform, avoiding outages as best we could. In the end, it was simply too much.
Something had to change. All we were really trying to achieve was to serve quality content people actually wanted to read. For that, we were paying an enormous price.
There had to be a better way of achieving the same. And if not the same, then something similar. Back to the drawing board!
Time for Some Drastic Simplifications
Like in many other projects, this was a great opportunity for some radical simplifications.
If all we wanted to do was jot down some thoughts on a couple of web pages and then distribute them to the world, why not just do exactly that?! Cut out all the other fluff!
Also, why did we have to solve authoring and serving articles using the same tool?
Despite having selected Ghost as our blogging platform of choice, we always kept our content independent of the underlying platform, trying to avoid vendor lock-in as much as possible.
This was achieved by authoring all articles as plain text Markdown files. Additional versioning in Git not only gave us a full history and traceability of any changes but also a golden source of truth that evolved with us.
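Concretely, an article in a setup like this is just a Markdown file with a small front-matter header. The file below is purely illustrative; the field values are hypothetical and not taken from the actual repository:

```markdown
---
# Illustrative front matter only; the title, date, and tags
# are placeholders, not real values from the blog's repository.
layout: post
title: "Goodbye K8s: A Realistic Assessment"
date: 2021-06-01
tags: [kubernetes, aws]
---

The article body is plain Markdown, so it stays portable:
nothing below the front-matter block depends on Ghost, Jekyll,
or any other platform.
```

Because the body is plain Markdown, the same file can be rendered by virtually any blogging tool; the front matter is the only generator-specific part.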
All we needed now was a piece of software that would take those plain text Markdown articles and convert them to web pages for the world to look at.
This clearly wasn’t a new concept; it has in fact been solved many times before, and new solutions are still being developed. In fact, there’s an ever-growing list of static site generators. There certainly seems to be some excitement in that segment.
After a lengthy evaluation period, we ended up settling on Jekyll. It gave us most of the features we cared about while producing static websites that closely resembled our then-current theme (courtesy of the great Jasper2 theme).
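For readers unfamiliar with Jekyll, a minimal site configuration might look like the sketch below. All values are placeholders (the article doesn’t disclose the real configuration), and the Jasper2 theme is typically vendored into the repository as a full site template rather than pulled in as a theme gem:

```yaml
# _config.yml -- illustrative values only
title: How Hard Can It Be?!
url: "https://example.com"   # placeholder domain
markdown: kramdown
permalink: /:title/
plugins:
  - jekyll-feed              # RSS/Atom feed generation
  - jekyll-seo-tag           # meta tags for search engines
# Jasper2 ships its own layouts/ and assets/ directories,
# which live directly in the site repository.
```

Running `jekyll build` against a config like this turns the Markdown articles into a static `_site/` directory ready to be served by anything that can host files.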
A nice side effect of this new setup is that it allows us to treat the blog like any other software project, with all the benefits: the Git repository as the golden source of truth, and a CI/CD pipeline for automated, early feedback as well as artefact builds.
We now even have tests in our CI/CD pipeline! Once staged through the environments, we’re able to release new versions of the blog just like any other software.
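The article doesn’t name the CI/CD system in use, but the build-test-release flow could be sketched in a GitHub-Actions-style pipeline like the one below. The bucket name, distribution ID, region, and the HTMLProofer test step are all assumptions for illustration:

```yaml
# .github/workflows/deploy.yml -- hypothetical pipeline sketch
name: build-test-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: eu-west-1   # placeholder region
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      # Build: Markdown articles -> static HTML in _site/
      - run: bundle exec jekyll build
      # Test: check the generated HTML (internal links, images, etc.)
      - run: bundle exec htmlproofer ./_site --disable-external
      # Release: upload to S3, then invalidate the CDN cache
      - run: aws s3 sync _site/ s3://example-blog-bucket --delete
      - run: aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```

A failed build or a broken link stops the pipeline before anything reaches production, which is exactly the "automated and early feedback" a software project expects.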
But where do we release to?! Who’s slinging our websites to the world?!
Enter AWS S3 and AWS CloudFront
Well, the problem of serving static websites to the world has a fairly simple, well-known, and powerful answer that was right in front of us the whole time: AWS S3 static website hosting!
Combined with CloudFront, we not only gained HTTPS support through an AWS ACM certificate (we care about privacy) but also a content delivery network (CDN) that lets us serve our websites at almost insane speed. And boy does that matter to certain search engines.
Combined with a dedicated least-privilege IAM user who is only allowed to perform basic CRUD operations on the underlying S3 bucket and invalidate the CloudFront distribution, our new technology stack was complete!
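Such a least-privilege policy could look roughly like the sketch below. The bucket name, account ID, and distribution ID are placeholders, not the real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketObjectCrud",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-blog-bucket/*"
    },
    {
      "Sid": "BucketListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-blog-bucket"
    },
    {
      "Sid": "CacheInvalidation",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::123456789012:distribution/EXAMPLE123"
    }
  ]
}
```

Anything not on this list is denied by default, which limits the blast radius if the deployment credentials ever leak.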
The new technology stack isn’t just simpler. It’s also a whole lot cheaper. There are no more machines to run. No more storage volumes to drag around. No more clusters to worry about. And as an additional benefit: no more backends exposed to the world.
We now effectively pay only for the AWS S3 storage we consume and the general AWS CloudFront egress. At the time of writing, all pages in the entire domain came in at just under 25 MB. The AWS Route 53 costs haven’t changed much either.
Our costs have dropped from hundreds of pounds a month to the price of a coffee. Moreover, our stack is now much more stable, with fewer moving parts and a readily available, established ecosystem.
Leaving the problem of serving web content as quickly as possible to the AWS CloudFront experts allows us to piggyback off that speed bonanza. We’re already seeing improved page load speeds and increased visitor numbers. However, it’s too early to tell if the two are actually related.
The Business Lessons Learned
The next article in this series starts the dive into the lessons learned by summing up everything we wish we had known about the business case we never made.
While not all of our lessons may be applicable elsewhere, there’s plenty of opportunity to avoid re-learning them. No need to repeat our mistakes!