Goodbye K8s! We'll Meet Again!

We’ve exited Kubernetes after 20 months. Here are some lessons for the enterprise use of K8s, its overall potential, and why we’ll meet again in the future.

This article is part of the “Goodbye K8s” series, looking at the reasons that eventually drove us away from one of the hottest platforms on the market:

  1. Goodbye K8s! We Had So Many Plans!
  2. Goodbye K8s! This Is Excellent!
  3. Goodbye K8s! This Isn’t Working!
  4. Goodbye K8s! Welcome Back, Old Friend!
  5. Goodbye K8s! It’s Still a Business!
  6. Goodbye K8s! Thanks for the Lessons!
  7. Goodbye K8s! We’ll Meet Again!

K8s in the Enterprise

We might just be a small blog website run by a couple of enthusiasts with a background in Financial Services. However, we still believe that aspects of our lessons learned from running K8s in production for 20 months are applicable in other areas as well.

Especially when planning to leverage K8s in a more stringent environment such as a corporate one. We clearly had no such constraints, and our solution would have been quite different had we had them. If we had used K8s at all.

Cluster Creation and Management

Clusters are a fairly non-trivial concept to understand. Let alone implement correctly. A cluster is essentially a state engine that also monitors itself and tries to repair broken states.

As such, clusters should ideally be self-contained so that external dependencies do not cause catastrophic failures. However, multiple abstraction layers can lead to a misalignment of the underlying clustering models. K8s is no exception, which does not exactly ease its creation or use.
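
To make the "state engine" idea more tangible, here is a minimal Python sketch using the kubernetes client library that compares a Deployment's declared state with what the control plane has actually converged to. The Deployment name and namespace are hypothetical placeholders, and a local kubeconfig is assumed:

    from kubernetes import client, config

    # Assumes a local kubeconfig; inside a pod you would use load_incluster_config().
    config.load_kube_config()
    apps = client.AppsV1Api()

    # "demo" in "default" is a hypothetical Deployment used purely for illustration.
    dep = apps.read_namespaced_deployment(name="demo", namespace="default")

    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"desired replicas: {desired}, ready replicas: {ready}")

    # If the two numbers differ, the control plane keeps creating or replacing
    # pods until the observed state matches the declared one.

The point is that you never tell the cluster how to repair a broken state; you only declare what the end state should look like and let the reconciliation loop do the rest.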

While we were able to outsource the entire problem of creating and operating a K8s cluster to GKE, that may not be possible in an enterprise environment for reasons such as security, controls, compliance, or regulatory requirements. Again, that choice comes at a price.

A fair amount of scaffolding exists to automate away the undifferentiated heavy lifting of manually creating a cluster. However, the task of creating a K8s cluster can still pose its own challenges and introduce yet another layer of complexity into an already non-trivial technology stack. In other words,

Are you in the business of creating or just consuming K8s clusters?!

Note that entering the cluster creation and management business brings along with it its very own ecosystem. How long certain (competing) tools will be around and supported is anyone's guess. It might be worth starting with the exit strategy when embarking on that business venture.

Separating cluster creation and management from its eventual utilisation certainly introduces the option for the segregation of duties. However, the problem of still deploying an app into a cluster — that now needs to be created beforehand — “in one go” might become even more challenging. More interfaces. More moving parts. More things that can (and will) go wrong.

Also, bear in mind that our technology stack was self-contained. We simply did not have the requirement to integrate with external services that may arise in an enterprise environment. The real question then becomes: how many external services are ready to work with and evolve at the speed of K8s?!

Container Creation and Management

In our unconstrained use case, leveraging readily available open source Helm charts and corresponding containers was an acceptable compromise.

This is unlikely to be an acceptable solution for a Cybersecurity-aware corporate environment that focusses on software supply chain security or is constrained by regulatory requirements.

So, the only realistic option in that case is to build the containers in-house from the ground up. Or have them delivered through a contracted third-party supplier.

The details (and quirks of containerising applications in certain languages) are out of scope of this article and can be found online. Time well spent!

While we left the logging and monitoring configuration to the GKE defaults, managing containerised apps in a corporate environment requires a fairly non-trivial setup for enterprise-level observability. Again, a new world.
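
As a small illustration of what the application side of that observability setup might involve, here is a hedged Python sketch that emits structured JSON logs to stdout, which is roughly the shape that log collectors such as the Cloud Logging agent on GKE expect to scrape. The field names are illustrative assumptions, not a mandated schema:

    import json
    import logging
    import sys
    from datetime import datetime, timezone

    class JsonFormatter(logging.Formatter):
        """Render every log record as a single JSON line."""
        def format(self, record):
            return json.dumps({
                # Field names are illustrative; adapt them to your log collector.
                "severity": record.levelname,
                "message": record.getMessage(),
                "logger": record.name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })

    handler = logging.StreamHandler(sys.stdout)  # containers log to stdout, not files
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])

    logging.getLogger("app").info("request served")

And that only covers logs; metrics, traces, and alerting are separate conversations again.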

A Brave New World

In this new world of infrastructure, a lot of overhead might be required to get things running, especially when pioneering use cases, and especially in the enterprise world where two firms are hardly ever identical. While you can run almost anything in a K8s cluster, there might be better options available. We learned that the hard way.

An enterprise K8s setup is already non-trivial and the abstraction layer makes maintenance, troubleshooting, and analysis a lot harder. It almost certainly requires new tooling as well.

K8s is an ever evolving ecosystem where not only the platform but also the tooling seems to evolve at the speed of light. This might result in a situation where yesterday’s state-of-the-art technology stack and toolchain are today’s legacy system and tomorrow’s compliance risk.

K8s APIs or other crucial custom additions may have changed, been deprecated, superseded, or even entirely removed. Support for tools may have ceased as the supporters moved on. This can lead to the feeling of trying to hit a moving target. And we haven’t even touched on the security aspects (or the afterthoughts thereof) of K8s.
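
One way to at least detect that moving target is to ask the cluster which API versions it actually serves before deploying anything. Below is a minimal sketch using the kubernetes Python client, assuming a local kubeconfig; the classic example is workloads that moved from extensions/v1beta1 to apps/v1 before the old group was removed entirely:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig

    # Collect every group/version the API server currently serves.
    served = set()
    for group in client.ApisApi().get_api_versions().groups:
        for version in group.versions:
            served.add(version.group_version)

    # Check the versions your manifests rely on before deploying.
    for required in ("apps/v1", "batch/v1"):
        status = "served" if required in served else "NOT served"
        print(f"{required}: {status}")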

There have been countless contributions to K8s over the past years, but not all of them may have necessarily had an enterprise environment in mind.

If enterprises are truly interested in the large scale adoption of K8s, then maybe it’s time for the enterprise community to get more involved in supporting the enterprise readiness of K8s.

Such as: Standardisation. Stability. Reliability. Prolonged Support. Basically, guarantees that things will still work when they are no longer the main focus of attention. But are still underpinning some fairly important processes.

While the above only speaks to K8s, the same also holds for containers. They define their very own world and ecosystem. Containerising applications with observability, traceability, graceful failure handling, and the ability to handle container restarts is no trivial task.
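
To give one concrete flavour of that non-triviality: a container is routinely stopped and rescheduled, so the process inside it has to treat SIGTERM as a normal event rather than a crash. A minimal Python sketch of the graceful-shutdown part, with the actual work loop left as a placeholder:

    import signal
    import sys
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        # Kubernetes sends SIGTERM first and SIGKILL only after the grace period.
        global shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)

    while not shutting_down:
        # Placeholder for the actual work: serve requests, drain queues, etc.
        time.sleep(1)

    # Finish in-flight work and release connections before exiting cleanly.
    sys.exit(0)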

The Potential of K8s

Now, the lessons we’ve learned and the problems K8s can introduce in a corporate environment may come across as negative. Some might even say they are borderline scaremongering. But then those lessons were mostly learned the hard way. In both worlds.

The web is full of articles singing the praises of K8s. And yes, there’s a decent chance they provide an accurate representation of reality. However, I firmly believe that disconfirming evidence is one of the most valuable pieces of information. That’s why we mainly focus on the things that didn’t work as expected. Again, in both worlds.

This could lead to the impression that we’re discouraging the use of K8s. We’re absolutely not. In fact, we’re big advocates of the platform. But for the right reasons. And with a holistic view of its capabilities and drawbacks.

K8s is a superb IaaS for containers with amazing powers; it’s extremely flexible and capable of running almost anything. But it also comes with additional layers of abstraction, an ever evolving ecosystem, and lots of moving parts that subsequently provide numerous opportunities to shoot oneself in the foot. More than once.

Nonetheless, the platform’s flexibility is also one of its biggest assets. It drives adoption. And it keeps cloud providers on their toes.

K8s has the potential to become the great equaliser in the “cloud wars”. Eroding decades of platform-specific competitive advantages (read: stickiness) and providing true portability between cloud providers.

But then why would any cloud provider build and support something that is essentially a fungible commodity good?!

Trends and Longevity

In recent years, K8s has become one of the hottest platforms on the market. Together with containers, it seems to have become one of the darlings of the developer community by now.

Early adoption, or large scale experimentation, might be easy in fairly unconstrained environments such as in our case. However, the picture changes quite a bit when moving to a corporate environment.

It’s usually a world of large-scale ecosystem integrations — sometimes with legacy systems — and decade-long programmes. Prolonged support, together with the availability and reliability of the underpinning platform throughout the entire programme lifecycle, matters more than perceived attractiveness. And never forget the business case! So,

If none of your peers have adopted K8s, then ask yourself why! Yes, it could be a case of ignorance or incompetence. But it could also be a conservative approach to managing risk. Or K8s simply didn’t make sense from a business perspective.

Ironically, the business aspect also extended to our use case and ultimately turned us away from K8s. We’ve learned to use what’s right for our objective rather than what’s hot at the moment. Use what makes sense economically.

Sometimes, special-purpose tools outperform general-purpose ones, as in our case with the combination of AWS S3 and AWS CloudFront. When expertise is required, it makes sense to leave it to the experts. Push the solution design up the stack as far as possible, accepting the limitations but at the same time embracing the ability to focus on what really matters.
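
For what it’s worth, the moving parts of that special-purpose setup fit into a few lines. Here is a hedged boto3 sketch of publishing a page to S3 and invalidating it in CloudFront; the bucket name, file path, and distribution ID are made-up placeholders:

    import time
    import boto3

    # Hypothetical identifiers; replace with your own bucket and distribution.
    BUCKET = "example-blog-bucket"
    DISTRIBUTION_ID = "E123EXAMPLE"

    # Upload the rendered page to S3.
    s3 = boto3.client("s3")
    s3.upload_file(
        "public/index.html", BUCKET, "index.html",
        ExtraArgs={"ContentType": "text/html"},
    )

    # Tell CloudFront to drop its cached copy so the new version is served.
    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/index.html"]},
            "CallerReference": str(time.time()),
        },
    )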

So, How Do You Use Kubernetes?!

While the above Worx for Me!™ when it comes to exiting Kubernetes and moving to a more scalable AWS S3/CloudFront solution in order to serve some blog pages to the world, you may have an alternative or better way.

Think this is all rubbish, massively overrated, or heading into the absolutely wrong direction in general?! Feel free to reach out to me on LinkedIn and teach me something new!

As always, prove me wrong and I’ll buy you a pint!

Acknowledgements

The author would like to especially thank Claudio Moreno for his prolonged patience and invaluable feedback, built on decades of real-life experience, while proofreading early versions of this article series.

Claudio — you might be my fiercest critic (and you definitely possess the power to annoy me with that at times) but then you are also my most valuable critic. Keeping me honest. Keeping me focussed. Much, much appreciated, my friend!


Dominic Dumrauf

A Cloud Success Champion by profession, an avid outdoor enthusiast by heart, and a passionate barista by choice. Still hunting that elusive perfect espresso.
