Ingress NGINX Retirement: What You Need to Know Before March 2026
The Kubernetes Ingress NGINX controller reaches end-of-life in March 2026. Learn what this means for your clusters, migration options, and which alternatives to consider.
No more security patches. No bug fixes. No releases. After years of warnings, the clock is finally running out.
This affects roughly 40-50% of internet-facing Kubernetes clusters globally. If you’re running kubernetes/ingress-nginx in production (and chances are you are), you need to start planning now.
I recently went through this migration myself. Here’s what I learned, including the full backstory, the alternatives, and an honest comparison.
What exactly is happening
This is a complete end-of-life. Not deprecation. Not maintenance mode. Full stop.
On November 11, 2025, Kubernetes SIG Network officially announced the retirement of kubernetes/ingress-nginx. After March 2026, the GitHub repos go read-only. Existing container images and Helm charts stay available, but they’ll receive zero updates.
The timeline tells the whole story:
- August 2021: GitHub Issue #7517 first discussed implementing Gateway API. The writing was on the wall early
- KubeCon NA 2024 (Salt Lake City): Maintainers announced plans to wind down ingress-nginx, introducing InGate as a planned replacement
- March 2025: CVE-2025-1974 hits, dubbed “IngressNightmare”. Unauthenticated remote code execution in the admission controller, allowing complete cluster takeover. Wiz Research found 43% of cloud environments potentially vulnerable with 6,500+ clusters publicly exposed
- November 2025: Both ingress-nginx and InGate declared retired. InGate never reached maturity
Important: This only affects kubernetes/ingress-nginx (the community project). The F5/NGINX Inc. maintained controller (nginxinc/kubernetes-ingress) is still actively supported and offers NGINX Gateway Fabric as their Gateway API implementation.
Why it died
The root causes are painfully common in open source:
- Chronic maintainer shortage: 1-2 volunteer developers working evenings and weekends for years. For thousands of production deployments.
- Technical debt: Features like configuration snippets became serious security liabilities (hello, CVE-2025-1974)
- Failed succession: InGate never attracted sufficient contributors
- Strategic shift: Gateway API is where Kubernetes networking innovation happens now
Gateway API: the official successor
Gateway API reached v1.0 GA in October 2023. The current v1.4.0 release (October 2025) is mature and production-ready.
One thing people get wrong: the Ingress API itself (networking.k8s.io/v1) is NOT deprecated. It’s GA, it’s supported, but it’s feature-frozen. No new capabilities will ever be added.
The Gateway API design separates concerns across three roles:
| Resource | Status | What it does |
|---|---|---|
| GatewayClass | GA | Template defining gateway behavior (like IngressClass) |
| Gateway | GA | Traffic entry points with listeners, protocols, TLS |
| HTTPRoute | GA | HTTP/HTTPS routing rules |
| GRPCRoute | GA (v1.1+) | gRPC-specific routing |
| TCPRoute/UDPRoute | Experimental | Layer 4 routing |
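To make the role split concrete, here is a minimal pairing of a Gateway and an HTTPRoute. All names, the gateway class, and the backend service are placeholders, not from any specific implementation:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway          # owned by the cluster operator
spec:
  gatewayClassName: example-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route            # owned by the application team
spec:
  parentRefs:
    - name: web-gateway      # attach this route to the gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: app-service
          port: 8080
```

The point of the split: platform teams manage Gateways (entry points, TLS, listeners), while application teams manage their own routes, instead of everyone editing one shared Ingress annotation soup.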
Fully conformant implementations (v1.4.0): Envoy Gateway, Istio, NGINX Gateway Fabric, Traefik Proxy, Cilium, kgateway.
Partially conformant: AWS Load Balancer Controller, Azure Application Gateway, GKE Gateway, Kong, Contour.
There’s an official conversion tool called ingress2gateway from kubernetes-sigs that automates the translation from Ingress resources to Gateway API.
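As a rough sketch of how you might use it (run against your current kubeconfig context; verify flags against the project’s README, as they may change between releases):

```shell
# Read Ingress resources from the cluster and print the
# equivalent Gateway API manifests to stdout for review
ingress2gateway print --providers=ingress-nginx > gateway-resources.yaml
```

Treat the output as a starting point to review and commit, not something to apply blindly.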
The alternatives: honest comparison
Traefik
Traefik v3.6 is the current stable release. Written in Go (no template injection like nginx), default ingress in K3s and Nutanix NKP, 3.4 billion+ downloads.
The interesting part for migrations: Traefik v3.5+ has a kubernetesIngressNGINX provider that interprets common nginx annotations. This lets you deploy Traefik alongside your existing nginx Ingress resources and migrate incrementally. It’s not magic, and complex nginx-specific configs will still need adaptation, but it lowers the barrier significantly for standard setups.
Pros:
- **100% Gateway API conformance**: Full v1.4.0 core conformance, fully future-proof.
- **Native nginx annotation support**: Can interpret many nginx annotations directly, reducing migration effort.
- **Massive community**: 60,900+ GitHub stars, 850+ contributors, excellent docs.
- **Native ACME/Let's Encrypt**: TLS-ALPN-01, HTTP-01, DNS-01 across 100+ providers.

Cons:
- **HA Let's Encrypt is Enterprise-only**: Or use cert-manager, which you probably already do.
- **Lower throughput**: ~19k req/s vs HAProxy's ~42k in benchmarks.
- **No native WAF in OSS**: Plugin-based only; the Enterprise WAF is claimed to be ~23x faster.
- **Some nginx features missing**: EWMA load balancing, session caching for forward auth.
Basic migration setup:

```shell
helm upgrade --install traefik traefik/traefik \
  --namespace traefik --create-namespace \
  --set providers.kubernetesIngressNginx.enabled=true
```
Deploy alongside existing nginx, test thoroughly, then gradually shift traffic. How smoothly this goes depends on your annotation complexity. If you’re heavy on nginx-specific features, you’ll still need to adapt some manifests. Traefik provides an official migration guide and a migration CLI tool that can help identify incompatibilities.
HAProxy
Two projects exist here: HAProxy Technologies (haproxytech/kubernetes-ingress, currently v3.2.4, enterprise-backed) and the community version (jcmoraisjr/haproxy-ingress, v0.15.0).
I went with HAProxy for my setup. The basic annotation mapping from nginx is fairly straightforward: swap nginx.ingress.kubernetes.io/ for haproxy.org/ or haproxy-ingress.github.io/. But here’s the thing: your migration complexity depends entirely on how deep you went with nginx-specific features. If you’re using mostly standard Ingress specs with basic annotations, you’ll be fine. If you’ve got server-snippet, configuration-snippet, or custom Lua scripts scattered across your manifests, expect more work.
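For a simple annotation, the swap can be as small as changing the prefix. This is an illustrative fragment; check each annotation against the controller’s documentation, since names and accepted values don’t always map one-to-one:

```yaml
# Before (ingress-nginx):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
---
# After (haproxytech/kubernetes-ingress):
metadata:
  annotations:
    haproxy.org/ssl-redirect: "true"
```

Anything that injected raw nginx config (snippets, Lua) has no prefix-swap equivalent and needs to be redesigned.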
Pros:
- **Unmatched performance**: Benchmarks show ~42,000 req/s vs ~19,000 for Traefik, with zero HTTP errors.
- **Lowest latency**: Best p75, p95, p99 percentiles across all tested controllers.
- **Enterprise WAF included**: OWASP CRS WAF out of the box, no extra subscription.
- **QUIC/HTTP3 native**: Full support since HAProxy 2.6+.

Cons:
- **Partial Gateway API**: Only v1alpha1 with TCPRoute; lags significantly behind Traefik.
- **Smaller community**: ~1,100 GitHub stars vs Traefik's 60,900+.
- **Two competing projects**: HAProxy Tech vs the community version can be confusing.
- **Scale considerations**: Clusters exceeding 500 backends need --backend-shards.
AWS ALB Controller
AWS Load Balancer Controller v3.0.0 brings Gateway API to GA for AWS environments. It provisions actual AWS ALBs (for HTTPRoute) or NLBs (for TCPRoute), routing traffic directly to pods.
I’ll mention this one because it makes sense for AWS-native shops. But let’s be real: you’re trading portability for managed infrastructure.
The good: Native AWS WAF/Shield, Cognito for OIDC auth, AWS Certificate Manager, high uptime SLA.
The bad: Zero portability. Your manifests won’t work on Hetzner, on-prem, or any other cloud. Cost adds up fast too. ALB is ~$16.42/month base + LCU charges. Ten microservices with separate ALBs? ~$214/month just for load balancing.
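The ~$214 figure above is easy to sanity-check. A rough sketch of the math, assuming the us-east-1 base rate of $0.0225 per ALB-hour and a light-traffic LCU spend of about $5/month per ALB (the LCU estimate is my assumption; real LCU charges scale with connections, bandwidth, and rule evaluations):

```python
ALB_BASE_HOURLY = 0.0225       # us-east-1 base price per ALB-hour (as of writing)
HOURS_PER_MONTH = 730          # AWS's standard monthly-hours convention
LCU_MONTHLY_ESTIMATE = 5.00    # assumed light-traffic LCU spend per ALB

def monthly_alb_cost(num_albs: int) -> float:
    """Estimated monthly cost of running `num_albs` separate ALBs."""
    base = ALB_BASE_HOURLY * HOURS_PER_MONTH   # ~$16.43 per ALB per month
    return num_albs * (base + LCU_MONTHLY_ESTIMATE)

print(round(monthly_alb_cost(10), 2))  # ten microservices, one ALB each
```

A shared ALB with path-based routing cuts this dramatically, which is exactly what an ingress controller gives you on other platforms for free.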
| Scenario | AWS LBC? | Why |
|---|---|---|
| AWS-only, need WAF/Cognito | Yes | Native integration |
| Multi-cloud / portability needed | No | Not portable at all |
| Cost-sensitive, many services | No | Per-ALB costs add up |
| On-prem or EU cloud | No | AWS-only |
Audit your cluster first
Before choosing a replacement, find out what you’re actually using. Run this to list all nginx-specific annotations across your cluster:
```shell
kubectl get ingress -A -o json | \
  jq -r '.items[] |
    "\(.metadata.namespace)/\(.metadata.name): \((.metadata.annotations // {}) | keys | map(select(startswith("nginx"))) | join(", "))"' | \
  grep nginx
```

(The `// {}` guard keeps jq from failing on Ingress objects that have no annotations at all.)
Or a simpler approach to just see which ingresses use nginx annotations:

```shell
kubectl get ingress -A -o yaml | grep -B5 "nginx.ingress"
```
High-risk annotations (require manual migration work):
```
# These inject raw nginx config and won't translate to other controllers
nginx.ingress.kubernetes.io/server-snippet
nginx.ingress.kubernetes.io/configuration-snippet
nginx.ingress.kubernetes.io/lua-resty-waf
nginx.ingress.kubernetes.io/stream-snippet
```
Medium-effort annotations (need equivalent config in new controller):
```
nginx.ingress.kubernetes.io/auth-url
nginx.ingress.kubernetes.io/auth-signin
nginx.ingress.kubernetes.io/proxy-body-size
nginx.ingress.kubernetes.io/proxy-read-timeout
nginx.ingress.kubernetes.io/ssl-redirect
nginx.ingress.kubernetes.io/use-regex
nginx.ingress.kubernetes.io/rewrite-target
```
Low-effort annotations (standard Ingress features, usually work out of the box):
```
nginx.ingress.kubernetes.io/backend-protocol
nginx.ingress.kubernetes.io/ssl-passthrough
nginx.ingress.kubernetes.io/affinity
```
If your grep output is mostly low/medium-effort annotations, migration will be straightforward. If you see server-snippet or configuration-snippet everywhere, plan for more work.
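If you have a lot of Ingress objects, triaging by hand gets tedious. Here is a small sketch that buckets annotation keys by the effort categories above; the lists mirror this article’s categorization, not an official mapping, so extend them to match your own audit:

```python
PREFIX = "nginx.ingress.kubernetes.io/"

# Effort buckets based on the categories in the article (not exhaustive)
HIGH_RISK = {"server-snippet", "configuration-snippet", "stream-snippet",
             "lua-resty-waf"}
MEDIUM = {"auth-url", "auth-signin", "proxy-body-size", "proxy-read-timeout",
          "ssl-redirect", "use-regex", "rewrite-target"}

def classify(annotation: str) -> str:
    """Return 'high', 'medium', 'low', or 'not-nginx' for one annotation key."""
    if not annotation.startswith(PREFIX):
        return "not-nginx"
    key = annotation[len(PREFIX):]
    if key in HIGH_RISK:
        return "high"
    if key in MEDIUM:
        return "medium"
    return "low"

# Example: feed in keys collected from `kubectl get ingress -A -o json`
sample = [
    "nginx.ingress.kubernetes.io/configuration-snippet",
    "nginx.ingress.kubernetes.io/proxy-body-size",
    "nginx.ingress.kubernetes.io/affinity",
]
for a in sample:
    print(f"{a} -> {classify(a)}")
```

A quick count of "high" results gives you a realistic first estimate of migration scope before you commit to a controller.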
The big comparison table
| Dimension | HAProxy | Traefik | AWS LBC | NGINX Gateway Fabric |
|---|---|---|---|---|
| Throughput | ~42k req/s (best) | ~19k req/s | N/A (managed) | ~30k req/s |
| Gateway API | Partial (v1alpha1) | Full (v1.4.0) | Full (v1.3.0) | Full (v1.4.1) |
| Migration from nginx | Medium (annotation swap) | Easy (native provider) | N/A | Medium |
| Let’s Encrypt | Via cert-manager | Native ACME | AWS ACM | cert-manager |
| WAF | OWASP CRS (Enterprise) | Plugin / Enterprise | AWS WAF native | NGINX App Protect |
| GitHub stars | ~1,100 | ~60,900 | ~4,000 | ~2,500 |
| Portability | Full | Full | AWS-only | Full |
Performance from HAProxy’s benchmarks (vendor benchmarks, but methodology is open source):
| Controller | Requests/sec | HTTP Errors | Latency |
|---|---|---|---|
| HAProxy | ~42,000 | 0 | Lowest |
| Traefik | ~19,000 | 1,342 | Mid |
| Envoy | ~18,500 | 19 | Low |
| NGINX Inc. | ~15,200 | 0 | Mid |
| NGINX Community | ~11,700 | 25 | Varies |
My take
I ended up with both. HAProxy for a project where raw performance mattered, Traefik for another where the smoother migration path and Gateway API conformance were priorities. So now I maintain two different ingress controllers across projects. Thanks, open source.
The migrations weren’t as bad as I feared, but I also wasn’t heavily invested in nginx-specific annotations. If you’ve kept your Ingress manifests relatively standard, switching controllers is manageable. If you’ve built a house of cards with server-snippet and configuration-snippet annotations, you’re looking at a bigger project.
A few honest observations:
For most teams, Traefik is probably the path of least resistance. The native nginx annotation support means you can run it alongside your existing setup and migrate gradually. Full Gateway API conformance is a bonus. The community is massive, which matters when you hit edge cases.
If throughput is genuinely your bottleneck: HAProxy. The benchmarks don’t lie. But be realistic about whether you actually need 42k req/s. Most applications don’t.
If you’re AWS-only and need native WAF/auth: AWS LBC makes sense. Just accept that you’re locking yourself in.
Regardless of what you pick: Start testing Gateway API now. The Ingress API isn’t going away, but that’s where all the innovation is happening. Your next controller should support it fully.
Migration resources
- Gateway API: Migrating from Ingress
- Gateway API: Migrating from Ingress-NGINX specifically
- Traefik: Migrate from Ingress NGINX Controller
- Traefik: Migration Kit (free tool)
- HAProxy: ingress-nginx migration assistant
- NGINX Gateway Fabric docs
- ingress2gateway CLI tool
Already migrated from ingress-nginx? I’d love to hear what you picked and how it went. Find me on GitHub or check out more Kubernetes cost comparisons at EU Cloud Cost.