Debugging Kubernetes Ingress Objects


You’ve deployed an Ingress object, and it doesn’t seem to be working. Now what?

Is the container healthy?

The underlying container needs to be healthy for the Ingress to work. Run kubectl describe pod … to see whether the Pod and its containers are healthy. If the Pod is Pending, look at the nodeSelectors, affinity clauses, LimitRanges, readiness and liveness probes, and so on. This isn’t a problem with the Ingress, though; it is a problem with the Pod.
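A quick triage might look like the following (the Pod and Namespace names are illustrative; substitute your own):

```shell
# Inspect the Pod's status, probe results and recent events
kubectl describe pod my-app-pod -n my-namespace

# If the Pod is Pending, check scheduling-related events for the reason
# (unschedulable nodes, unsatisfied nodeSelector/affinity, etc.)
kubectl get events -n my-namespace \
  --field-selector involvedObject.name=my-app-pod
```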

Does the Ingress point to the correct service and port?

Is the Ingress pointing to the intended Service and port? Does the Service even exist?
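You can answer both questions from the command line (names here are illustrative):

```shell
# The "Backends" section shows the Service and port each rule targets,
# and flags backends that don't resolve
kubectl describe ingress my-ingress -n my-namespace

# Confirm the referenced Service exists and actually exposes that port
kubectl get service my-service -n my-namespace -o wide
```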

Is the service pointing to the right Pods?

Ingress objects specify a Service to define which Pods the controller should direct the traffic to. Does the Service’s selector match the labels on the desired Pods?
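One way to check, assuming illustrative names:

```shell
# Show the Service's selector...
kubectl get service my-service -n my-namespace -o jsonpath='{.spec.selector}'

# ...and compare it against the labels on the Pods you expect to serve traffic
kubectl get pods -n my-namespace --show-labels

# If the selector matches no Pods (or no ready Pods), the endpoints will be empty
kubectl get endpoints my-service -n my-namespace
```

An empty ENDPOINTS column is a strong sign that the selector and labels don’t line up, or that no matching Pod is passing its readiness probe.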

Are you using the correct DNS name corresponding to your Ingress class (or potentially the Ingress host value)?

Sometimes Ingress objects specify a host name. If they don’t, it may be the case that a default host name is being used. Either way, are you hitting the right DNS name based on the configuration of your Ingress object?
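To see what host the Ingress is actually configured for, and to test a rule without depending on DNS (the host name and IP below are illustrative):

```shell
# The HOSTS column shows which host names this Ingress responds to
kubectl get ingress my-ingress -n my-namespace

# Test a specific host rule directly against the controller's address,
# bypassing DNS entirely
curl -v --resolve app.example.com:443:203.0.113.10 \
  https://app.example.com/some/path
```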

Have you checked the Ingress logs for errors?

Sometimes an Ingress object can’t be deployed by the Ingress controller (for example, perhaps you are specifying a configuration-snippet or some other annotation that is syntactically invalid). The logs of the Ingress controller will often provide the required information to determine if this is occurring.
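For example, with a typical ingress-nginx installation (the Namespace and Deployment names vary by installation, so adjust to yours):

```shell
# Find the controller Pods
kubectl get pods -n ingress-nginx

# Look for rejected annotations or reload failures around the time
# you applied the Ingress
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller | grep -i error
```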

Are the Ingress rewrite-target and path consistent with the path expected in the Pod?

For example, suppose you are using the Nginx Ingress controller, the Pod expects requests on a path like /some/path/widgets, and you issue a request to https://app.example.com/some/path/widgets (the host and path here are illustrative).

If the Ingress path is /some/path/(.*) and your rewrite-target annotation is /$1, then the actual request that the Ingress controller makes to the Pod is /widgets, which is not the path that the Pod expects. It may not matter much which of the three you change, but the URL, the Ingress path and the rewrite-target annotation must be consistent with each other.
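A minimal sketch of such an Ingress, using the path and annotation above (the Service, host and other names are illustrative):

```yaml
# With this configuration, a request to /some/path/widgets is rewritten
# to /widgets before it reaches the Pod
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /some/path/(.*)
        pathType: ImplementationSpecific  # required for regex paths
        backend:
          service:
            name: my-service
            port:
              number: 80
```

If the Pod really does expect the /some/path prefix, the simplest fix is usually to drop the rewrite-target annotation entirely.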

Is there NetworkPolicy blocking connectivity from the Ingress to the Pod?

If you have an ingress NetworkPolicy on the Namespace containing the Pods, or an egress NetworkPolicy on the Namespace containing the Ingress controller, your traffic might be blocked as a result.
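A starting point for this check, assuming illustrative names:

```shell
# List NetworkPolicies in the application Namespace (ingress rules)
kubectl get networkpolicy -n my-namespace

# ...and in the Namespace running the Ingress controller (egress rules)
kubectl get networkpolicy -n ingress-nginx

# Inspect a policy's selectors and allowed peers in detail
kubectl describe networkpolicy my-policy -n my-namespace
```

Remember that NetworkPolicies are additive: if any policy selects a Pod, all traffic not explicitly allowed by some policy is denied for that Pod.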




Distinguished Architect at Sun Life Financial. Focused on containers & Kubernetes. Views & opinions expressed here are my own, not necessarily those of Sun Life
