Accessing Kubernetes API Server When There Is An Egress NetworkPolicy

Paul Dally
Apr 21, 2022

Sometimes you need to access the Kubernetes API server from within your Pods. For example, you might need to update a ConfigMap or Secret or modify a Pod’s labels. Or perhaps you might want to update a Deployment’s desired replicas or clean up evicted Pods.

However, you may also want to protect your Pod with an Egress NetworkPolicy. If you’ve tried this, you likely have found that it didn’t work (at least at first) — and this may even have caused you to abandon the NetworkPolicy. Or, you may have implemented a NetworkPolicy that is overly broad. This article will help you successfully deploy a NetworkPolicy that allows the required connections with very little else.

Consider the following CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: getcontent
            app.kubernetes.io/part-of: helloworld
            app.kubernetes.io/instance: default
            app.kubernetes.io/component: cronjob
        spec:
          nodeSelector:
            topology.kubernetes.io/region: slf-canada
          containers:
          - name: getcontent
            image: helloworld-webserver:v1.0.0
            imagePullPolicy: IfNotPresent
            command:
            - /bin/bash
            - -c
            - |
              curl -o /tmp/example.html http://example.com
              kubectl create configmap example-html --from-file=/tmp/example.html --dry-run=client -o=yaml | kubectl apply -f -
          restartPolicy: Never
      backoffLimit: 0
      parallelism: 1
      completions: 1

This CronJob downloads HTML from example.com and uses that HTML to update a ConfigMap. Your requirements may vary, but I’m sure that you can imagine your own use-cases being conceptually similar.
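One prerequisite worth noting: for the in-container kubectl apply to succeed at all, the Pod’s ServiceAccount needs RBAC permission to read and write ConfigMaps. A minimal sketch follows; the ServiceAccount and object names here are illustrative assumptions, not part of the CronJob above, and the CronJob’s Pod template would also need a matching serviceAccountName.

```yaml
# Hypothetical RBAC for the CronJob's kubectl apply; names are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: getcontent
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-writer
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  # kubectl apply needs get (to read the live object) plus create/patch
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: getcontent-configmap-writer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-writer
subjects:
- kind: ServiceAccount
  name: getcontent
```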

The CronJob makes 3 connections:

  • kube-dns.kube-system.svc.cluster.local:53 — for DNS resolution. Some Kubernetes distributions may use a different service name — but generally speaking whatever is handling DNS is deployed to the kube-system Namespace. If the Pods in your cluster that are handling DNS are in a different Namespace or using a different service, you may need to adjust the NetworkPolicy accordingly.
  • example.com:80 — for retrieving HTML. As of when this article was written, this seems to reliably resolve to 93.184.216.34.
  • kubernetes.default.svc.cluster.local:443 — for updating ConfigMaps. This typically resolves to 10.96.0.1, however it may vary depending on your specific cluster configuration, and there is a further gotcha that we’ll look at shortly…

Based on the above, you could not be faulted for creating the following NetworkPolicies:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-allpods-to-dns
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cronjob-to-examplecom
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: cronjob
  policyTypes:
  - Egress
  egress:
  - to:
    # example.com
    - ipBlock:
        cidr: 93.184.216.34/32
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cronjob-to-apiserver
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: cronjob
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    ports:
    - protocol: TCP
      port: 443

Note: If you are using Kubernetes <v1.21, you may need to apply the kubernetes.io/metadata.name: kube-system label to the kube-system Namespace to allow the allow-allpods-to-dns NetworkPolicy to function. You can do that like this:

kubectl label namespaces kube-system kubernetes.io/metadata.name=kube-system

Having implemented these NetworkPolicies, you would have found that it still didn’t work — the kubectl apply command still times out. Unfortunately, there are one or more “hidden” IP addresses that come into play. You can determine what these additional IP addresses are by running the following command:

>kubectl get endpoints --namespace default kubernetes
NAME         ENDPOINTS                                       AGE
kubernetes   10.1.1.1:12388,10.1.1.2:12388,10.1.1.3:12388    688d
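The ipBlock entries for a NetworkPolicy can be generated from that endpoint list. As a sketch, the snippet below uses a sample string mirroring the output above; on a live cluster you would populate it from kubectl instead (the commented jsonpath query is one assumed way to do that):

```shell
# Sample stands in for a live query such as:
#   endpoints=$(kubectl get endpoints kubernetes --namespace default \
#     -o jsonpath='{range .subsets[*].addresses[*]}{.ip},{end}')
endpoints="10.1.1.1:12388,10.1.1.2:12388,10.1.1.3:12388"

# Split on commas, drop the port, and emit one ipBlock entry per address
echo "$endpoints" | tr ',' '\n' | cut -d: -f1 | while read -r ip; do
  echo "- ipBlock:"
  echo "    cidr: ${ip}/32"
done
```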

Based on the endpoints shown above, the allow-cronjob-to-apiserver NetworkPolicy needs to be defined as follows:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cronjob-to-apiserver
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: cronjob
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    - ipBlock:
        cidr: 10.1.1.1/32
    - ipBlock:
        cidr: 10.1.1.2/32
    - ipBlock:
        cidr: 10.1.1.3/32
    ports:
    - protocol: TCP
      port: 443

Note: Different clusters will have different endpoints, and so you may need to patch in the appropriate ipBlock entries depending on the cluster you are deploying to.
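If you manage per-cluster manifests with kustomize, one way to do that per-cluster patching is a JSON 6902 patch in each cluster’s overlay. The sketch below assumes a base/overlay layout and the IP addresses from the example above; both are illustrative assumptions.

```yaml
# Hypothetical per-cluster kustomization.yaml; paths and IPs are illustrative
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: NetworkPolicy
    name: allow-cronjob-to-apiserver
  patch: |-
    # Replace the egress "to" list with this cluster's API server addresses
    - op: replace
      path: /spec/egress/0/to
      value:
      - ipBlock:
          cidr: 10.96.0.1/32
      - ipBlock:
          cidr: 10.1.1.1/32
      - ipBlock:
          cidr: 10.1.1.2/32
      - ipBlock:
          cidr: 10.1.1.3/32
```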


Paul Dally

AVP, IT Foundation Platforms Architecture at Sun Life Financial. Views & opinions expressed are my own, not necessarily those of Sun Life