Kubernetes – An Introduction to Sidecars

Paul Dally
4 min read · Jun 3, 2022


[Image: an old BMW motorcycle with a sidecar. Source: https://www.maxpixel.net/Oldtimer-Bmw-500-Isolated-Old-Motorcycle-Sidecar-4344066]

What is a sidecar?

In the context of Kubernetes, a sidecar is simply a container that is co-located with and tightly-coupled to your primary application container(s). The sidecar can also share resources (like network and storage) with the primary application container(s).

You want a container to have only a single concern, which is why the sidecar is discrete from your application container(s). The functionality of a sidecar, however, should be inter-dependent with the application containers — if it isn’t, you would probably be better served to simply deploy the container in a different Pod.

If you look at the picture of the motorcycle with the sidecar above, you can see that the sidecar does not have independence — it goes only to destinations that the motorcycle goes, at the speed that the motorcycle driver chooses, etc. It’s the same thing with sidecar containers. You wouldn’t bolt together two motorcycles and call one of them a sidecar.
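
In podSpec terms, a sidecar is nothing more than an additional entry in the Pod's containers list, typically coupled to the primary container through a shared volume (the containers also share the Pod's network namespace, so they can reach each other on localhost). Here is a minimal sketch; the names and images are purely illustrative and not part of the example that follows:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar          # illustrative name
spec:
  volumes:
  - name: shared-data             # storage shared by both containers
    emptyDir: {}
  containers:
  - name: app                     # the primary application container
    image: my-app:1.0.0           # illustrative image
    volumeMounts:
    - mountPath: /data
      name: shared-data
  - name: sidecar                 # the tightly-coupled helper container
    image: my-helper:1.0.0        # illustrative image
    volumeMounts:
    - mountPath: /data
      name: shared-data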

An example scenario

Imagine that two different teams are working together on an application. A front-end UI team creates static web content, and an operations team is responsible for deploying/hosting that content. The UI team stores the content they create in a private internal repository (I’ve simply used GitHub for this example, but it could be an internal SCM, an S3 bucket, an NFS or SMB file share, etc.).

The operations team is using a static web server Pod in Kubernetes to deliver the static content. Each time the content changes, they rebuild the web server image from a Dockerfile similar to the following:

Dockerfile

FROM nginx:1.15.9-alpine
ADD nginx.conf /etc/nginx/
ADD www/* /www/

The Dockerfile is built as follows:

docker build --progress=plain --no-cache -t helloworld-webserver:1.0.0 docker\

and deployed with a manifest similar to the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment-1
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: helloworld-deployment-1
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helloworld-deployment-1
    spec:
      containers:
      - name: hello-world
        image: helloworld-webserver:1.0.0

Soon everyone realizes how awful this approach is!

The static content is being added to the image at build time, which means that an image build and application redeploy will be required for all changes to the static content. Depending on the level and nature of automation involved, it may take some time and some handoffs/process for the build/redeploy to happen. This may be reasonable in some contexts, but at scale this will likely become rather unpleasant!
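
To make the toil concrete, every content change means something like the following (a sketch only; the registry push and the new tag are assumptions about how such a pipeline might look):

# Rebuild the image with the new static content baked in
docker build --progress=plain --no-cache -t helloworld-webserver:1.0.1 docker\

# Push the new tag to whatever registry the cluster pulls from
docker push helloworld-webserver:1.0.1

# Roll the Deployment over to the new image
kubectl set image deployment/helloworld-deployment-1 hello-world=helloworld-webserver:1.0.1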

Example — version 2 (sidecar)

We can easily enhance this design with a sidecar. A sidecar could periodically synchronize the content directly from the source repository into a volume shared between the application container and the sidecar container, entirely eliminating the need for rebuilds and redeploys when the content changes and dramatically reducing the elapsed time before new content becomes available.

Because I’m using GitHub as the source of the static content in this example, I create another image containing the git binary. You could use any number of different approaches (object storage with the AWS, Azure, or GCP CLI, wget/curl, smbclient, etc.) depending on your specific requirements.

Dockerfile

FROM alpine:latest
RUN apk update && \
    apk --no-cache add \
    git

Which we build as follows:

docker build --progress=plain --no-cache -f docker\Dockerfile.getcontent -t get-content:1.0.0 docker\

The sidecar is simply a second container defined in the podSpec. In our case, because we want to share files, we also create an emptyDir volume and mount it into each of the containers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment-2
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: helloworld-deployment-2
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helloworld-deployment-2
    spec:
      volumes:
      - name: www-content
        emptyDir: {}
      containers:
      - name: hello-world
        image: helloworld-webserver:1.0.0
        volumeMounts:
        - mountPath: /www
          name: www-content
      - name: getcontent-sidecar
        image: get-content:1.0.0
        command:
        - sh                   # the alpine-based image has sh, not bash
        - -c
        - |
          git clone https://github.com/psdally/k8s-sidecars.git /www
          while true; do
            git -C /www pull
            sleep 60
          done
        volumeMounts:
        - mountPath: /www
          name: www-content

The volume sharing technique lets the sidecar read files produced by the application container just as it lets the application container read files produced by the sidecar.

The sidecar in this Deployment clones the repository into the volume shared with the hello-world container when it starts, and then pulls new commits every 60 seconds. Note that any files that were in the /www folder of the web server image are masked (“layered over”) by the volumeMount.
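
If you want to see the sharing for yourself, something like the following should list the cloned repository content from the perspective of either container (assuming the Deployment above is running):

# The shared content as seen by the web server container
kubectl exec deployment/helloworld-deployment-2 -c hello-world -- ls /www

# ...and as seen by the sidecar
kubectl exec deployment/helloworld-deployment-2 -c getcontent-sidecar -- ls /www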

The command should probably have been put into a script file and built into the image rather than being specified in the sidecar’s podSpec, but I’ve used this approach to make the example as transparent as possible.
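
If you did want to bake the logic into the image, a sketch might look like the following (sync.sh is a hypothetical script containing the clone-then-pull loop shown in the manifest above; it is not part of the example repository):

FROM alpine:latest
RUN apk update && \
    apk --no-cache add \
    git
# sync.sh would contain the clone-then-pull loop from the manifest above
COPY sync.sh /usr/local/bin/sync.sh
RUN chmod +x /usr/local/bin/sync.sh
ENTRYPOINT ["/usr/local/bin/sync.sh"]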

Conclusion

Sidecars can have lots of uses beyond what is shown above — for example, introducing debugging tools into your Pod, dynamically injecting secrets into Pods from a secrets manager, implementing cross-cutting concerns (e.g. logging, authentication, authorization, etc.). The possibilities are limitless!
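
A logging sidecar, for instance, might tail application log files from a shared volume and ship them to a central collector. Here is a minimal sketch (the image, paths and names are illustrative; real setups often use fluent-bit or similar instead of tail):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:1.0.0                  # assumed to write /var/log/app/app.log
    volumeMounts:
    - mountPath: /var/log/app
      name: app-logs
  - name: log-tailer
    image: busybox:1.36
    command:
    - sh
    - -c
    - touch /var/log/app/app.log && tail -f /var/log/app/app.log
    volumeMounts:
    - mountPath: /var/log/app
      name: app-logs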

Typical disclaimers apply: this code is meant to illustrate the sidecar concept as succinctly as possible. Do not view these examples as feature-complete or fully-ideal configurations. For example, you should consider adding probes, resources configuration and a rolling update strategy, and perhaps nodeSelectors, affinity, anti-affinity or topologySpreadConstraints; you shouldn’t run as root; and scripts should probably be defined in the image rather than in the Deployment. You should also make sure that you are managing the space utilization of the emptyDir; see this article for more information on that.
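
As a hint of what some of that hardening might look like, the sidecar container and the shared volume from the example above could gain something along these lines (the values are illustrative, not recommendations):

      - name: getcontent-sidecar
        image: get-content:1.0.0
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000                # illustrative non-root UID
        resources:
          requests:
            cpu: 10m
            memory: 32Mi
          limits:
            cpu: 100m
            memory: 64Mi
      volumes:
      - name: www-content
        emptyDir:
          sizeLimit: 100Mi               # bounds how much the cloned content can consume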

The source code used for this article can be found on GitHub.
