What is Ingress in Kubernetes?
Ingress -- an L7 (HTTP) router. Unlike Service (L4, IP:port), Ingress routes by domains, URL paths, headers. One Ingress = many services.
Junior Level
Simple Definition
Ingress is a Kubernetes API object that manages external HTTP/HTTPS traffic, routing it to the appropriate services inside the cluster. It is a “single entry point” — one address for the entire cluster, and Ingress decides which service gets each request.
Analogy
Ingress is like a reception desk in a large office building. All calls go through one number. The receptionist asks: “Accounting?” — directs to the 3rd floor. “IT?” — to the 5th floor. One phone number, many destinations.
YAML Example
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 8080
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 8080
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret
```
kubectl Example
```shell
# List all Ingress resources
kubectl get ingress

# Detailed information
kubectl describe ingress my-ingress

# Check if the Ingress Controller is running
kubectl get pods -n ingress-nginx
```
When to Use
- Multiple services accessible from outside the cluster via a single IP
- Need routing by domain names (api.example.com, web.example.com)
- Need routing by paths (/v1, /v2, /static)
- Centralized SSL/TLS certificate management
Middle Level
How it Works
Ingress consists of two separate components:
1. Ingress Resource — a YAML manifest with routing rules (hosts, paths, TLS). This is just a record in etcd; by itself, it does nothing.
2. Ingress Controller — the actual application (Nginx, HAProxy, Traefik, Envoy) that reads Ingress Resources from the API Server and configures itself as a reverse proxy. Without a controller, the Ingress Resource is useless.
Ingress resource – routing rules only. Ingress Controller – the process (nginx, Traefik, HAProxy) that applies these rules. Without Controller, Ingress is useless.
Request chain:
Client → Cloud LoadBalancer (external IP) → Ingress Controller Pod → Service → Backend Pod
The Ingress Controller typically runs behind a LoadBalancer or NodePort Service. It gets an external IP from the cloud provider and accepts all incoming traffic.
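The fronting Service for the controller might look like the sketch below. This is illustrative: names, labels, and ports vary by installation method (Helm chart, manifests); it assumes the standard ingress-nginx layout.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # name as used by the common ingress-nginx install
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # cloud provider allocates the external IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

On bare metal, `type: NodePort` (or MetalLB) replaces the cloud LoadBalancer.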
Practical Scenarios
Scenario 1: Path-based routing for microservices
```yaml
rules:
  - host: api.example.com
    http:
      paths:
        - path: /users
          pathType: Prefix
          backend:
            service:
              name: user-service
              port:
                number: 8080
        - path: /orders
          pathType: Prefix
          backend:
            service:
              name: order-service
              port:
                number: 8080
```
Scenario 2: TLS with Cert-Manager (automatic Let’s Encrypt certificates)
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
spec:
  secretName: api-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - api.example.com
```
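With cert-manager's ingress-shim, the Certificate does not even have to be written by hand: an annotation on the Ingress itself asks cert-manager to create and renew it (this assumes a ClusterIssuer named letsencrypt-prod already exists in the cluster):

```yaml
metadata:
  annotations:
    # cert-manager watches this annotation and creates the Certificate
    # for the hosts listed in the Ingress tls section
    cert-manager.io/cluster-issuer: letsencrypt-prod
```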
Scenario 3: Canary deployment via Ingress
```yaml
annotations:
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-weight: "10"
```
10% of traffic goes to the canary version, 90% — to the stable one.
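The weight split can be illustrated with a toy simulation: each request independently has a `weight`% chance of landing on the canary, which is roughly how weighted canary selection behaves. This is an illustrative model, not the controller's actual code.

```python
import random

def route(weight: int) -> str:
    """Pick a backend per request: `weight`% of requests go to the canary."""
    return "canary" if random.randrange(100) < weight else "stable"

random.seed(0)  # fixed seed so the demo is repeatable
counts = {"canary": 0, "stable": 0}
for _ in range(10_000):
    counts[route(10)] += 1

print(counts)  # close to a 10/90 split
```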
Common Mistakes Table
| Mistake | Consequence | Solution |
|---|---|---|
| Creating Ingress without Ingress Controller | Ingress Resource exists but traffic doesn’t flow | Install nginx-ingress, traefik or another controller |
| Forgotten ingressClassName | Uses default controller or errors | Always specify ingressClassName: nginx |
| Wrong pathType | Exact doesn’t match /v1/extra, Prefix matches too much | Use Prefix with rewrite-target or ImplementationSpecific |
| TLS Secret doesn’t exist | Ingress works but without HTTPS | Create Secret with tls.crt and tls.key before creating Ingress |
| Single controller for entire cluster — single point of failure | If controller goes down, all external traffic is unavailable | Run controller as DaemonSet or replicas with anti-affinity |
| Controller annotations not documented | Hard to maintain, unknown what each annotation does | Document annotations, use IngressClass with parameters |
Comparison: Ingress vs LoadBalancer vs NodePort
| Characteristic | NodePort | LoadBalancer | Ingress |
|---|---|---|---|
| OSI Level | L4 (TCP/UDP) | L4 (TCP/UDP) | L7 (HTTP/HTTPS) |
| Cost | Free | $15-50/mo per LB | $15-50/mo for ONE LB for all services |
| Routing | By port | By port | By host, path, headers |
| TLS | No (at Pod level) | No (at Pod level) | Centralized |
| Scalability | One service per port | One service per LB | Hundreds of services on one LB |
| Rate Limiting | No | Depends on cloud | Yes (via annotations) |
| WebSocket | Yes | Yes | Yes (with configuration) |
When NOT to Use
- Non-HTTP TCP/UDP traffic — Ingress only works at L7. For TCP/UDP, use LoadBalancer Service or TCP/UDP ConfigMap in nginx-ingress
- Single service — if you have only one API, using a LoadBalancer Service is simpler
- Need L4 load balancer with TLS termination — use Gateway API (the next generation of Ingress) or Service Mesh
- Internal communication between microservices — use regular Services for this, not Ingress
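For the TCP/UDP case above, ingress-nginx exposes raw TCP streams through a separate ConfigMap rather than through Ingress rules. A sketch, assuming the controller was started with the `--tcp-services-configmap` flag and a postgres Service exists (both names here are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "5432": "default/postgres:5432"
```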
Senior Level
Deep Mechanics: Ingress Controller, Nginx, and Controller Reconciliation
Ingress Controller Architecture: Ingress Controller is a Kubernetes Controller that follows the standard reconciliation pattern:
1. Watch: Subscribes to Ingress Resources via API Server (watcher)
2. List: Gets full list of Ingress, Services, Endpoints, Secrets
3. Reconcile: Compares desired state (Ingress YAML) with current (nginx.conf)
4. Update: Generates new nginx.conf and runs `nginx -s reload`
5. Loop: Repeats on every change (via Informer)
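The reconcile loop above can be sketched in a few lines of Python. This is a toy model with hypothetical names; real controllers use client-go informers and render full nginx templates.

```python
# Toy model of an Ingress controller's reconcile step.

def render_config(ingresses):
    """Render a pseudo nginx.conf: one server block per host."""
    return "\n".join(
        f"server {{ server_name {ing['host']}; proxy_pass http://{ing['service']}; }}"
        for ing in sorted(ingresses, key=lambda i: i["host"])
    )

def reconcile(ingresses, current_conf, reload_fn):
    """Compare desired state (rendered config) with current; reload only on change."""
    desired = render_config(ingresses)
    if desired != current_conf:
        reload_fn(desired)  # stands in for writing nginx.conf + `nginx -s reload`
    return desired

reloads = []
conf = reconcile([{"host": "api.example.com", "service": "api-v1:8080"}], "", reloads.append)
conf = reconcile([{"host": "api.example.com", "service": "api-v1:8080"}], conf, reloads.append)
print(len(reloads))  # only the first reconcile changed anything -> 1 reload
```

The no-op second call is the point: a controller that diffs before reloading avoids the reload storms described below.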
Nginx Ingress Controller (most popular):
- Uses the Go library `k8s.io/client-go` for informers
- On Ingress/Service/Endpoint change, regenerates `nginx.conf` via Go templates
- Executes `nginx -s reload` (graceful reload, no downtime)
- With many Ingress resources (1000+), nginx reload becomes a bottleneck — reloading the configuration takes 50-200ms
Contour (Envoy-based):
- Uses Envoy Proxy instead of nginx
- Envoy uses xDS API for dynamic configuration — no reload
- With 1000+ Ingress, Contour is more stable since there’s no reload overhead
Gateway API (the future of Ingress):
- More expressive API: TCP/UDP support, gRPC, header-based routing
- Role-based: separate resources for Gateway (infra team) and HTTPRoute (dev team)
- Gradually replacing Ingress, but not all controllers support it yet
Gateway API – reached GA (v1.0) in late 2023; it is versioned independently of Kubernetes releases. More flexible, supports TCP/UDP, not just HTTP. Gradually replacing Ingress.
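A minimal HTTPRoute, the Gateway API replacement for Ingress rules. This sketch assumes a Gateway named shared-gateway (a placeholder) has already been provisioned by the infra team:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: shared-gateway      # the Gateway this route attaches to
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-v1
          port: 8080
```

Note the role split: the Gateway (listeners, TLS, IP) belongs to the infra team, while each dev team manages its own HTTPRoutes.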
Trade-offs
| Aspect | Trade-off |
|---|---|
| Nginx vs Envoy | Nginx is simpler and more familiar, but reload on large configs = latency spike. Envoy has no reload but is harder to operate |
| Single vs multiple controllers | One controller is cheaper and simpler, but single point of failure. Multiple controllers = team isolation but more expensive |
| Ingress vs Gateway API | Ingress is mature and supported everywhere. Gateway API is more expressive but less mature |
| Annotations vs CRD | Annotations are simpler but not validated. CRD (Kong, Gloo) provide validation and documentation but require CRD installation |
| TLS termination: Ingress vs Pod | Termination on Ingress = centralized management, but Ingress sees all traffic. Termination in Pod = end-to-end TLS but harder certificate management |
Edge Cases (6+)
Edge Case 1: Ingress Controller Reload Storm
With 1000+ Ingress Resources and frequent deployments (100+ per day), the nginx controller constantly regenerates configuration. Each reload = 50-200ms latency spike for active connections. Solution: batch updates, or switch to an Envoy-based controller (Contour, Istio Gateway).
Edge Case 2: Sticky Sessions with IPVS
Nginx Ingress supports sticky sessions via cookie. But if kube-proxy runs in IPVS mode instead of iptables, load balancing at the Service level may conflict with nginx sticky sessions: a client with a sticky cookie reaches the right Pod via nginx, but if nginx forwards to the Service IP, IPVS may redirect to a different Pod.
Edge Case 3: TLS Secret in a Different Namespace
Ingress Resource can only reference a TLS Secret in its own namespace. If team A manages Ingress in namespace prod, and certificates are stored in namespace cert-manager, a Secret copying mechanism is needed (e.g., cert-manager + secretTemplate or external-secrets operator).
Edge Case 4: WebSocket through Ingress
WebSocket requires an HTTP → WS connection upgrade. Nginx Ingress closes idle connections after 60 seconds by default. For long-lived WebSocket connections:
```yaml
annotations:
  nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```
Edge Case 5: Ingress and NetworkPolicy
The Ingress Controller must have access to all Backend Pods. If you configure a NetworkPolicy blocking incoming traffic to the Backend namespace, the Ingress Controller won’t be able to forward requests. Solution: NetworkPolicy must allow traffic from the ingress-nginx namespace.
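Such an allow rule might look like the following sketch. It assumes the controller runs in the ingress-nginx namespace and that the cluster sets the standard `kubernetes.io/metadata.name` namespace label (automatic since K8s v1.22):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
spec:
  podSelector: {}              # applies to all Pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```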
Edge Case 6: Multiple Ingress Controllers
A cluster can have multiple Ingress Controllers (nginx for external services, istio for internal). Ingress Resource must explicitly specify ingressClassName. If not specified — the controller with annotation ingressclass.kubernetes.io/is-default-class is used. Controller conflicts = unpredictable routing.
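The IngressClass itself is a cluster-scoped resource; marking one class as the default looks like this (controller string as used by ingress-nginx):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # picked when an Ingress omits ingressClassName
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```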
Edge Case 7: Ingress with Headless Service
Ingress requires a Service with ClusterIP. Headless Service (clusterIP: None) is not supported as an Ingress backend — nginx cannot resolve headless Services into upstreams. Solution: create a separate ClusterIP Service for Ingress.
Performance Numbers
| Metric | Value |
|---|---|
| Nginx reload latency | 50-200ms (depends on config size) |
| Envoy xDS update latency | 1-10ms (no reload) |
| Ingress Controller→Backend latency | 1-5ms (within cluster) |
| Max Ingress Resources per Controller | ~1000-2000 (nginx), ~5000+ (Envoy) |
| TLS handshake overhead | 5-15ms (TLS 1.3), 15-30ms (TLS 1.2) |
| Rate limiting overhead | 0.1-1ms per request (nginx limit_req) |
| Cloud LoadBalancer cost | $15-50/mo (one LB for entire cluster) |
Security
- TLS termination on Ingress — traffic between Ingress and Pods goes unencrypted. For compliance (PCI-DSS, HIPAA), end-to-end TLS is needed: Ingress → Pod via HTTPS with internal cert
- Rate Limiting — protect backends from DDoS via annotations: `nginx.ingress.kubernetes.io/limit-rps: "100"`, `nginx.ingress.kubernetes.io/limit-connections: "50"`
- WAF (Web Application Firewall) — nginx-ingress supports ModSecurity for OWASP Top 10 protection: `nginx.ingress.kubernetes.io/enable-modsecurity: "true"` plus a `modsecurity-snippet` containing `SecRuleEngine On`
- IP Whitelisting — restrict access to internal APIs: `nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,172.16.0.0/12"`
- HSTS headers — enforce HTTPS: `nginx.ingress.kubernetes.io/use-hsts: "true"`, `nginx.ingress.kubernetes.io/hsts-max-age: "31536000"`
- Ingress Controller in a separate namespace — run it in `ingress-nginx` with limited RBAC and a NetworkPolicy
Production War Story
Situation: SaaS platform, 200 microservices, one Nginx Ingress Controller for the entire cluster (replica 2). During Black Friday, deployments increased from 50 to 500 per day. Each deployment created/updated an Ingress Resource, and the nginx controller regenerated configuration.
Problem:
- Nginx reload occurred every 2-3 seconds
- Each reload caused 100-200ms latency spike for 30% of active connections
- p99 latency grew from 200ms to 2 seconds
- Clients started getting timeouts
- HPA scaled Pods, creating even more Ingress updates → feedback loop
Post-mortem and fix:
- Split Ingress Controllers by criticality: `ingress-external` (public APIs), `ingress-internal` (B2B services)
- Migrated to Contour (Envoy-based) — xDS updates without reload, latency spikes eliminated
- Implemented batching of Ingress updates — the controller waits 5 seconds before reload, collecting changes
- Limited deployments per hour via CI/CD pipeline throttling
- Added monitoring for `nginx_ingress_controller_reload_duration_seconds`
Monitoring after fix:
```promql
# Alert: Nginx reload latency growing
histogram_quantile(0.99, nginx_ingress_controller_config_last_reload_duration_seconds_bucket) > 0.1

# Alert: Ingress Controller pod restarts
rate(kube_pod_container_status_restarts_total{namespace="ingress-nginx"}[5m]) > 0

# Alert: 5xx errors on Ingress
rate(nginx_ingress_controller_requests{status=~"5.."}[5m]) > 10
```
Monitoring (Prometheus/Grafana)
Key metrics:
```promql
# Request rate by status
rate(nginx_ingress_controller_requests[5m])

# Latency p50/p95/p99
histogram_quantile(0.99, nginx_ingress_controller_request_duration_seconds_bucket)

# 5xx error rate
sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m])) by (ingress)

# Ingress Controller reload latency
histogram_quantile(0.99, nginx_ingress_controller_config_last_reload_duration_seconds_bucket)

# Open connections (for WebSocket/long-polling)
nginx_ingress_controller_nginx_metric_connections{state="active"}

# Rate limiting events (429 responses)
rate(nginx_ingress_controller_requests{status="429"}[5m])
```
Grafana Dashboard panels:
- Request rate (RPS) by Ingress/Service — green/red line by status
- Latency p50/p95/p99 — correlation with deployments
- 5xx error rate — alert at > 1% of total traffic
- Nginx reload count and duration — reload storm detection
- Active connections — WebSocket/long-polling monitoring
- Rate limiting events — DDoS or misconfigured limits detection
Highload Best Practices
- Use Envoy-based controller (Contour, Istio Gateway) with 1000+ Ingress — nginx reload latency becomes critical
- Separate Ingress Controllers by criticality — public APIs and internal services on different controllers
- Configure rate limiting — `limit-rps`, `limit-connections` for DDoS and abuse protection
- Use Cert-Manager for TLS automation — manual certificate management doesn’t scale
- Batch Ingress updates in CI/CD — don’t create 50 Ingress at once, group deployments
- Monitor reload duration — alert at p99 > 100ms
- Anti-affinity for Ingress Controller Pods — spread replicas across nodes:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: ingress-nginx
          topologyKey: kubernetes.io/hostname
```

- Use Gateway API for new projects — the Ingress API is feature-frozen; Gateway API is its successor
- End-to-end TLS for compliance — internal mTLS between Ingress and Pods via Service Mesh or cert-manager internal CA
- Canary via Ingress annotations — `canary: "true"` + `canary-weight: "10"` for safe deployment
Interview Cheat Sheet
Must know:
- Ingress — L7 (HTTP/HTTPS) routing: by domain, path, headers; one IP for many services
- Ingress Resource (YAML rules) ≠ Ingress Controller (nginx, Traefik, Envoy — actual proxy)
- Without Controller, Ingress Resource is useless — just a record in etcd
- Ingress saves money: one LoadBalancer for all HTTP services instead of LB per service
- TLS termination centralized on Ingress; Cert-Manager for auto-certificates (Let’s Encrypt)
- Nginx reload (50-200ms) with 1000+ Ingress → latency spike; Envoy (xDS) no reload
- Gateway API (v1.0 GA in 2023, versioned independently of K8s) — the future; supports TCP/UDP, not just HTTP
Common follow-up questions:
- “Does Ingress work without Controller?” — No, Controller is the actual process (nginx/Envoy) that reads rules
- “Ingress vs LoadBalancer?” — Ingress = L7 HTTP routing, LoadBalancer = L4 TCP/UDP
- “Nginx vs Envoy Controller?” — Nginx is simpler but has reload overhead; Envoy has no reload but is more complex
- “WebSocket through Ingress?” — Needs special annotations (proxy-read-timeout: 3600), otherwise 60s timeout
Red flags (DO NOT say):
- “Ingress is a load balancer” (it’s an L7 router; Service does load balancing)
- “Ingress works without Controller” (Controller is mandatory)
- “Ingress for TCP/UDP” (only HTTP/HTTPS; for TCP — LoadBalancer Service)
- “One Controller for entire cluster — OK” (single point of failure; need replicas)
Related topics:
- [[What Service types exist (ClusterIP, NodePort, LoadBalancer)]] — L4 vs L7
- [[What is Service in Kubernetes]] — Service vs Ingress
- [[How to organize rolling update in Kubernetes]] — canary via Ingress