Kubernetes — Sequencing Container Startup

There are a number of potential reasons that you might want to sequence container startup — within a Pod, between Pods or even based on availability of external systems. For example:

  • You may need to run required initialization logic before your container(s) start, perhaps changing permissions on a file, populating a cache with initial values, or retrieving necessary files from object storage.
  • You might want to make sure that a database or some other service that your container depends on is running.
  • A sidecar may require an application container to be ready before it can start (or vice versa).

If you’ve previously used Docker Compose, you might have used (or tried to use) “depends_on” for some of these use cases — which ostensibly allows you to define dependencies between services. There is no such setting in Kubernetes. This isn’t a bad thing, however, because if you have tried to use depends_on for anything non-trivial, you will have learned that it does not wait for the services marked as dependencies to be “ready”, but only until they have started — and this is usually not satisfactory.

Let’s look at a few approaches to achieve a better result with Kubernetes.

initContainer(s) for Startup Processing

In addition to “regular” (app) containers, Kubernetes supports initContainers. These are specialized containers that run before the app containers in a Pod, often (but not always) to do initialization steps that need to happen before the app containers start.

Just as with app containers, you can specify multiple initContainers. Unlike app containers, initContainers run sequentially rather than concurrently: each must finish successfully before the next one starts, and all must finish successfully before the app container(s) start. initContainers need not use the same image as the regular containers, although often that may make sense.

If you have initialization processing that must be done sequentially before the app container (or containers) start, then initContainers may be exactly what you want. Here is a somewhat contrived example of an initContainer in a Deployment, which makes scripts in the /scripts subdirectory executable, and then executes one of them:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: helloworld-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helloworld-deployment
    spec:
      initContainers:
      - name: make-scripts-executable
        image: helloworld-webserver:1.0.0
        command:
        - sh
        - -c
        - |
          chmod u+x /scripts/*.sh && \
          /scripts/sync-config.sh
        resources:
          requests:
            cpu: 25m
            memory: 32Mi
          limits:
            cpu: 50m
            memory: 64Mi
      containers:
      - name: hello-world
        image: helloworld-webserver:1.0.0
        resources:
          requests:
            cpu: 100m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi

If you wanted, of course, you could structure this as two initContainers — one to make the scripts executable, and one to run the sync-config.sh script. And you could likely make the scripts executable in the Docker image rather than doing it in an initContainer at all… The key point, however, is that you have options.
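Sketched in isolation, that split might look like the following initContainers section (reusing the image and script names from the example above; resources omitted for brevity):

```yaml
      initContainers:
      - name: make-scripts-executable      # step 1: runs to completion first
        image: helloworld-webserver:1.0.0
        command: ["sh", "-c", "chmod u+x /scripts/*.sh"]
      - name: sync-config                  # step 2: starts only if step 1 succeeded
        image: helloworld-webserver:1.0.0
        command: ["sh", "-c", "/scripts/sync-config.sh"]
```

Because initContainers run in order, a failure in make-scripts-executable prevents sync-config (and the app containers) from ever starting.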

initContainer(s) to Wait For Dependent Services

A special case of initialization logic is delaying Container startup until dependent services are ready, whether those dependent services are other Kubernetes Pods or external to Kubernetes entirely.

Imagine that the helloworld Pods that we looked at previously have a dependency on an API to properly start. You might implement an initContainer something like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: helloworld-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helloworld-deployment
    spec:
      initContainers:
      - name: wait-for-api
        image: helloworld-webserver:1.0.0
        command:
        - sh
        - -c
        - until [ "$(curl -s -w '%{http_code}' -o /dev/null "http://myapi/health")" -eq 200 ]; do echo "Waiting for API to be ready"; sleep 10; done;
        resources:
          requests:
            cpu: 25m
            memory: 32Mi
          limits:
            cpu: 50m
            memory: 64Mi
      containers:
      - name: hello-world
        image: helloworld-webserver:1.0.0
        resources:
          requests:
            cpu: 100m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi

The hello-world container will not start until the wait-for-api initContainer completes, and that won’t happen until the API health check returns a 200 status code. You’ll want to make sure that whatever you implement adds robustness features as per your requirements (better logging? A timeout on the health check? A cap on the number of retries?).
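A more defensive version of that polling loop might look something like the sketch below. The attempt count, sleep interval, and health URL are illustrative values, and the non-zero return is what causes the initContainer (and therefore Pod startup) to fail so Kubernetes can retry it per the Pod's restartPolicy:

```shell
#!/bin/sh
# wait_for CHECK_CMD MAX_ATTEMPTS SLEEP_SECONDS
# Runs CHECK_CMD until it succeeds, retrying at most MAX_ATTEMPTS times.
wait_for() {
  check_cmd=$1
  max_attempts=$2
  sleep_seconds=$3
  attempt=1
  until $check_cmd >/dev/null 2>&1; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "gave up after $attempt attempt(s)" >&2
      return 1                       # fails the initContainer; the Pod restarts it
    fi
    echo "Waiting for dependency (attempt $attempt/$max_attempts)"
    attempt=$((attempt + 1))
    sleep "$sleep_seconds"
  done
  echo "ready after $attempt attempt(s)"
}

# In the initContainer you might invoke it as (URL is illustrative):
# wait_for "curl -sf --max-time 5 http://myapi/health" 30 10 || exit 1
```

curl's -f flag makes it exit non-zero on HTTP error statuses, so the separate http_code comparison from the example above is no longer needed, and --max-time bounds each individual probe.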

You can chain multiple Pods together — if Pod A depends on Pod B which depends on Pod C, you can simply implement similar initContainers on Pod B (to wait on Pod C) and Pod A (to wait on Pod B).

One important point to remember here — your dependent service may become unavailable after your Container has started (and therefore after the initContainers have all completed). Your application needs to be able to handle this scenario as well, and you may need different approaches for this. Handling this scenario may mean that you no longer need to sequence your Pod startup, so make sure that you are looking at your application holistically before proceeding.

Container command for intra-Pod Container Sequencing

If you need to sequence the startup of multiple containers within a single Pod, initContainers will not help: all initContainers must complete before any app container starts, and then all app containers start concurrently.

You can, however, use a very similar approach to sequence containers in the same Pod. For example, if you have a sidecar that needs to wait until the primary application container is running, you might do something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: helloworld-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helloworld-deployment
    spec:
      containers:
      - name: sidecar
        image: mysidecar:1.0.0
        command:
        - sh
        - -c
        - until [ "$(curl -s -w '%{http_code}' -o /dev/null "http://localhost:8080/health")" -eq 200 ]; do echo "Waiting for API to be ready"; sleep 10; done; /scripts/start-sidecar.sh
        resources:
          requests:
            cpu: 100m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
      - name: hello-world
        image: helloworld-webserver:1.0.0
        resources:
          requests:
            cpu: 100m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi

Both the sidecar and hello-world containers start at the same time. The sidecar container, however, doesn’t immediately run the /scripts/start-sidecar.sh script. Instead, just as in the initContainer example, its command polls the application started in the hello-world container. Because it uses localhost, the curl request from the sidecar container is routed to the hello-world container within the same Pod, which happens to be listening on port 8080 in my example (this will vary, of course, depending on your specific app container). Only once the hello-world container returns a 200 status is /scripts/start-sidecar.sh run to actually start the “real” sidecar process.

Note — specifying command overrides the ENTRYPOINT of the image (and specifying args overrides the image’s CMD), so be sure to account for whatever the image would otherwise have run.

Not trivial, but still pretty easy, right?

One definite improvement to the technique above should be noted. For Kubernetes dependencies (especially dependencies in your own namespace, and especially in the intra-Pod container sequencing scenario), instead of using curl or another service-specific probe to verify readiness, you should consider checking the readiness of the container or associated Service with kubectl or the Kubernetes APIs. This will make your solution much more reusable and will decouple the containers from each other. It does increase the complexity, however, and the focus of this story is the general principles involved. In the future, I will provide an example of implementing this pattern, so stay tuned!
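As a rough sketch of the kubectl variant (the image tag, label selector, and timeout here are illustrative, and the Pod would need a ServiceAccount whose RBAC allows get/list/watch on Pods), such an initContainer might look like:

```yaml
      initContainers:
      - name: wait-for-api
        image: bitnami/kubectl:1.24    # illustrative kubectl image/tag
        command:
        - sh
        - -c
        # Blocks until Pods matching the label report the Ready condition,
        # or fails the initContainer after the timeout elapses
        - kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=myapi --timeout=300s
```

Note that this checks Kubernetes readiness (as reported by the target Pod's readinessProbe) rather than making an application-specific health call, which is exactly what makes it reusable across dependencies.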

--

Paul Dally

Distinguished Architect at Sun Life Financial. Focused on containers & Kubernetes. Views & opinions expressed here are my own, not necessarily those of Sun Life