Kubernetes – What You Should Know When Your Development Team Proposes Using Kubernetes

Paul Dally
Feb 16, 2022

Kubernetes makes developers more self-sufficient

Kubernetes includes a lot of “software-defined” capability: compute provisioning, load-balancing, firewalls, basic availability monitoring and automated health restoration, storage allocation and high-availability deployment. In traditional environments, development teams often need to engage other teams for these capabilities. Kubernetes makes it simple enough that with a modest amount of training and hands-on experience, developers can be self-sufficient for much of what they previously needed other teams for. Fewer handoffs generally lead to higher productivity and lower costs.

Kubernetes will reduce application time to market

Developers can easily run an entire Kubernetes environment locally (with tools like kind or minikube), allowing activities to “shift left”, which usually results in faster iterations and higher quality.
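
As an illustration, a disposable local cluster can be described in a single file. This is a minimal sketch assuming kind (Kubernetes-in-Docker) as the local tool; minikube and k3d work similarly:

```yaml
# kind-cluster.yaml: a minimal multi-node cluster for local development.
# Assumes kind is installed; create with: kind create cluster --config kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```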

And as noted above, there will likely be fewer handoffs and requests for services provided by other teams, which should also reduce elapsed time.

Finally, Kubernetes can reduce the amount of automation that you might otherwise need to create or acquire to implement the same level of capability in your DevOps pipelines. Less time working on automation means more time working on the application, which should get you to market more quickly.

Kubernetes can run more applications with the same compute infrastructure

Kubernetes has a number of features that allow more applications to share the same compute infrastructure. For example, a HorizontalPodAutoscaler allows Kubernetes to dynamically increase the number of replicas when an application is busy and scale it back when the load subsides. Pod requests/limits allow your workloads to reserve what they need for “normal” operation while still having additional shared resources available when necessary. And since it is unlikely that all of your workloads will see peak utilization at the same time, this allows more applications to be deployed on the same underlying compute capacity, thereby reducing run-rate charges.
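
To make that concrete, here is a minimal sketch of requests/limits plus a HorizontalPodAutoscaler. The names (my-app, the registry path) and the numbers are illustrative assumptions, not sizing advice:

```yaml
# Illustrative only: names, image and resource values are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry.example.com/my-app:1.0  # hypothetical image
          resources:
            requests:        # reserved for "normal" operation
              cpu: 250m
              memory: 256Mi
            limits:          # cap on shared burst capacity
              cpu: "1"
              memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add/remove replicas to hold ~70% average CPU
```

The scheduler reserves capacity based on requests, limits cap opportunistic bursting, and the HorizontalPodAutoscaler adds or removes replicas to keep average CPU utilization near the target.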

But importantly, Kubernetes does this while maintaining a high degree of isolation between the applications sharing the compute capacity. Applications are deployed in containers that for all intents and purposes act like discrete servers (but with significantly reduced overhead), thereby eliminating most of the adverse interactions that would be likely if you deployed the applications to a single conventional virtual machine, and much of the overhead of deploying each application to its own virtual machine. And you don’t have to install the application onto each worker node; Kubernetes simply pulls the image onto the worker node and starts it up automatically when the application scales up.

Kubernetes can increase application reliability

When an instance of an application crashes, Kubernetes will by default automatically restart it. With a liveness probe, it can easily be configured to do the same when service becomes degraded.
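
Detecting a degraded instance is typically done with a liveness probe, which restarts the container when a health check fails repeatedly. In this sketch, the /healthz endpoint, port and image are assumptions about the application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-registry.example.com/my-app:1.0  # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz          # assumed health endpoint
          port: 8080              # assumed application port
        initialDelaySeconds: 10   # allow time to start before probing
        periodSeconds: 15         # probe every 15 seconds
        failureThreshold: 3       # restart after 3 consecutive failures
```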

If a worker Node running your application fails, in most cases Kubernetes will simply start your Pods on a different worker node. You can easily configure your application to run in multiple availability zones (or even regions, in some configurations), often allowing your application to survive an availability-zone (or region) outage with no impact to your clients.
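
Spreading an application across zones is likewise declarative. Here is a sketch using topologySpreadConstraints, which would sit inside the Pod template of a Deployment; the app label is again illustrative:

```yaml
# Pod spec fragment: spread replicas evenly across availability zones.
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # zones may differ by at most one Pod
      topologyKey: topology.kubernetes.io/zone  # well-known node label set by cloud providers
      whenUnsatisfiable: DoNotSchedule          # or ScheduleAnyway for a soft preference
      labelSelector:
        matchLabels:
          app: my-app                           # illustrative label
```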

Kubernetes is required for some applications

Some vendors are beginning to distribute their software only as container images and to support it only under Kubernetes. If you use legacy versions of such software and will eventually need to upgrade, or would like to be positioned to run such software moving forward, you’ll need Kubernetes capability.

Kubernetes has low vendor/provider lock-in

You can fairly easily move a Kubernetes application from, for example, EKS to AKS, from GKE to EKS, or from Rancher to OpenShift. It isn’t that there are zero ways you can find yourself with provider-specific details, but the level of provider-specific detail is relatively minimal.

Of course, if your application itself is coded to access DynamoDB directly using the AWS SDK, moving to another cloud provider becomes more difficult. But all other things being equal, reducing lock-in is rarely a bad thing.

Kubernetes can accelerate migrating to the cloud

Because of the low vendor/provider lock-in, you can run Kubernetes clusters both on-prem and in the cloud, or even a single “stretch” cluster spanning both. In a complex application ecosystem, this can really help a cloud migration. You can migrate traditional applications into containers on-prem in stages, and then as a final step simply deploy the whole mess to the cloud without significant re-engineering.

Kubernetes can make multi/hybrid cloud more feasible

Instead of having Kubernetes clusters on-prem and in just one cloud provider, you can have clusters in multiple cloud providers, or even a single cluster across multiple cloud providers.

This allows you to deploy the same application to multiple cloud providers without substantial changes or, in the case of a “stretch” cluster, to have a single application deployment run on multiple cloud providers concurrently, or to move seamlessly and automatically between them based on capacity and preferences.

But even the good things in life aren’t completely free…

Kubernetes requires an investment in time and skills

It will take a while before your development team is fully proficient in Kubernetes. Education is important, but experience and trial-and-error are critical as well. And while you get a lot of functionality for a comparatively small ramp-up, there is still a ramp-up.

Think of it this way (the numbers aren’t scientific, but they illustrate the point): with Kubernetes, your application developers should be able to handle 80%+ of what they previously relied on network, server and storage teams for, with 5% of the training/ramp-up time. That’s a great deal! But please, resist the urge to think that with Kubernetes everything comes for free.

Kubernetes may require an investment in infrastructure

You will need to provision a platform to run Kubernetes. Self-managed distributions like OpenShift, Mirantis, Rancher and EKS Anywhere will require compute capacity for their control plane, and possibly for image registries if you are deploying your own registry. Hosted options like EKS, AKS, GKE and ROSA abstract the control plane away from you, but you will pay a small hourly (or monthly) fee.

Does your monitoring solution integrate with Kubernetes? If not, you may need to upgrade, replace or augment it. Vulnerability management? Same thing. You should expect a variety of start-up costs, just as there would be for any new technology, at least if you want a quality result.

Kubernetes will cost more at first (until it doesn’t)

Your developers will need to learn how to use Kubernetes, as discussed above. The infrastructure investments in monitoring and supportability will need to be put in place and maintained. If you only have one application that you run in Kubernetes, you may never break even on the initial costs.

Kubernetes is not a good fit for all applications

Some applications are not a good fit for Kubernetes. Or perhaps more accurately, there is a “spectrum of good fit”, and not all applications are at the far right end. Stateless microservices/APIs are generally a fantastic fit. Databases that have been designed with containers/Kubernetes in mind, like MongoDB, can be an excellent fit. However, databases like Oracle are, in my opinion, probably not a great fit, for licensing as well as technical reasons, at least if you need high availability.

This isn’t simply a question of “stateless applications are good and stateful applications are bad”, because there are any number of stateful applications that are fantastic candidates for Kubernetes. A full dissertation on what makes an application a good candidate is unfortunately beyond the scope of this article (although that topic is on my backlog for the future), but a good approach might be to start with stateless applications (usually good candidates) and then consider stateful applications gradually once your team has acquired some proficiency with Kubernetes.

In short, it may be possible to run almost anything in Kubernetes, but that doesn’t mean that you should run everything in Kubernetes.

Conclusion

What makes Kubernetes interesting is the business benefits it provides. Sure, there are organizations out there that have failed to achieve the benefits described above, but typically this happens when there were uninformed expectations, a failure to invest, or a failure to weed out the applications that are not a good fit.

Many organizations are running Kubernetes very successfully, at scale and for critical applications. If your organization isn’t already using Kubernetes, you should probably consider it too!


Paul Dally

AVP, IT Foundation Platforms Architecture at Sun Life Financial. Views & opinions expressed are my own, not necessarily those of Sun Life