# Exposing services in Kubernetes, easy!
Exposing services in Kubernetes might seem relatively simple at first glance… After all, you just need to use an `Ingress` object and you’re done, right?

In practice, this is indeed what people do daily. You deploy an Ingress Controller like NGINX, Traefik, Contour or even Istio (the list is obviously non-exhaustive), then you apply these few lines:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: filador
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog
            port:
              number: 8080
```
And our blog is exposed for everyone to see! (not forgetting the DNS record that goes with it…)
The Ingress Controller is therefore one of the almost mandatory components to deploy within Kubernetes. Indeed, this component, which acts as an HTTP and/or HTTPS reverse proxy, is not provided in a vanilla Kubernetes installation.
These various solutions have proven their effectiveness, yet beneath their apparent simplicity lies a more complex reality…
## The few issues with Ingress

The Kubernetes `Ingress` object, although essential, quickly shows its limits when you try to go beyond its basic functionality.
### The annotation problem

Each Ingress Controller implements its own annotations to extend the capabilities of the standard `Ingress` object. The result? Your configuration becomes intimately tied to the chosen tool:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # NGINX-specific annotations
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Or Traefik-specific ones
    traefik.ingress.kubernetes.io/router.middlewares: auth@file
[...]
```
This approach makes things complicated if you want to switch products. In reality, the `Ingress` object is a basic reverse proxy drastically enriched by these famous annotations, without modifying the Kubernetes object API itself. As a result, the choice of an Ingress Controller is usually guided by the features it offers rather than by any other factor.

Some have gone even further in this approach, going as far as creating an entirely separate object. This is the case with Traefik (I’ve already talked about it in a previous post), whose `IngressRoute` object avoids relying on this kind of mechanism and makes the configuration more readable, with a Traefik flavour of course!
### TLS management: who does what?

Here’s another important point: the management of the TLS certificate, carried by the `Ingress` object itself:
```yaml
tls:
- hosts:
  - blog.filador.ch
  secretName: mon-super-certificat-tls
```
However, this object is often in the hands of the teams deploying their applications within the cluster, while the certificate is a critical component typically managed by more “infra”-oriented teams.

Yes, Ingress Controllers often let you set a default certificate, but this is not necessarily sufficient, and automatic certificate generation with tools like cert-manager does not meet all requirements either.
### Beyond HTTP and HTTPS

The `Ingress` object was initially designed for the HTTP and HTTPS protocols. But what do you do when your applications use other protocols, such as TCP, UDP, gRPC and so on?
Yes, traditionally you would use `Service` objects of type `LoadBalancer`, but this is clearly not practical, particularly in cloud environments where expenses can escalate rapidly…
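As a reminder, exposing a raw TCP port this way looks something like the following sketch (the service name, selector and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-external   # illustrative name
spec:
  type: LoadBalancer        # typically provisions one cloud load balancer per Service
  selector:
    app: postgres           # illustrative selector
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
```

Since each such `Service` usually provisions its own cloud load balancer, the bill grows with every non-HTTP application you expose.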
### Limited observability

Each Ingress Controller exposes its own metrics in different formats. NGINX has its specific metrics, Traefik its own, Istio still others… Difficult to get a unified view of your traffic behaviour when you’re juggling multiple tools!

Here too, OpenTelemetry is becoming the standard on the observability side, and it’s not always easy to plug a collector URL into the Ingress Controller configuration to ship metrics, traces and logs.
### The limit of basic routing

`Ingress` uses relatively simple routing based on hostname and path. But what if you want to route based on a specific HTTP header? Or implement a canary deployment with a 90/10 traffic split? Again, it’s back to controller-specific annotations…
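With ingress-nginx, for instance, a canary split requires a second `Ingress` decorated with canary annotations. A sketch (the backend service name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-canary
  annotations:
    # ingress-nginx canary annotations: send 10% of traffic to this backend
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-v2   # illustrative canary backend
            port:
              number: 8080
```

Functional, but entirely non-portable: move to another controller and the annotations mean nothing.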
These few limitations are what pushed the Kubernetes community to rethink service exposure. This is how the Gateway API was born.
## Gateway API, what is it?

### Project origins
The Gateway API story begins in 2019 with Kubernetes’ SIG Network.
The project, initially named Service APIs, had a clear objective: to create a modern, extensible and standardised API for service exposure, while also going further by integrating load balancing and service mesh concerns. The project had to overcome the functional fragmentation imposed by the `Ingress` object.

It was in 2021 that the project took the name Gateway API and began to gain maturity with the first implementations in tools such as Istio.
### The beginning of the end for Ingress?

Not at all! There are no plans to deprecate `Ingress` objects and Ingress Controllers. As indicated in this link, the Gateway API is intended to be an evolution, not a replacement.
In my opinion, the project will need to reach a certain level of maturity before it can fully meet 100% of the needs addressed by the traditional method of exposing services in Kubernetes. At this point, there will no longer be any reason for the two to coexist.
It should be noted that some projects have already taken the lead, such as ingress-nginx, which will see no further feature development. It will, however, still be maintained for two years to ease migration to InGate, its successor implementing the Gateway API.
## Basic concepts

### Different objects for different responsibilities
The Gateway API introduces a completely different approach from Ingress, with several specialised objects that complement each other.
One of the major innovations of Gateway API is its clear separation of responsibilities through these objects:
- `GatewayClass`: Deployed and configured by the cloud provider or the tool implementing the Gateway API. This object defines the type of gateway available, with its capabilities and limitations. On Google Cloud, for example, different `GatewayClass` options let you select the load balancer that will be used behind the scenes.
- `Gateway`: Instantiated during cluster configuration by an Ops team, based on an existing `GatewayClass`. It defines the listeners (ports, protocols, TLS certificates) and the rules for attaching routes to it.
- `HTTPRoute`, `GRPCRoute`, `TCPRoute`, and so on: Created by development teams to expose their services externally according to the protocol. They define routing rules, backends and other functionality without worrying about the infrastructure defined above.
This separation gives each team the opportunity to focus on their area of expertise without compromising the security of external application exposure.
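To make this concrete, here is a hedged sketch of a `GatewayClass` and a `Gateway` built on it (the controller name, gateway name, namespace and hostname are all illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class                 # illustrative
spec:
  # Identifies which controller implementation manages Gateways of this class
  controllerName: example.com/gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
  namespace: gateway-system           # illustrative infra namespace
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "*.filador.ch"          # illustrative wildcard host
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-tls            # Secret holding the certificate
```

Note how TLS now lives on the `Gateway`, in infra hands, rather than on every application route.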
### A need? A Kubernetes object!

Unlike `Ingress`, which only allows HTTP and HTTPS exposure (that is, layer 7 of the OSI model), the Gateway API offers a specialised object for each protocol, depending on requirements:
- `HTTPRoute`: Classic HTTP routing with advanced support for headers, query parameters, methods, etc.;
- `GRPCRoute`: Routing specially designed for gRPC, with service and method support;
- `TCPRoute`: Layer 4 TCP routing for non-HTTP applications;
- `UDPRoute`: UDP application support;
- `TLSRoute`: Routing traffic based on TLS metadata, notably SNI (Server Name Indication).
### Multi-gateways here I come!

One of the advantages of the Gateway API is its flexibility in managing multiple entry points. The same `HTTPRoute` can easily be attached to multiple `Gateway` objects:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: multi-gateways
spec:
  parentRefs:
  - kind: Gateway
    name: internal-gateway # Internal Gateway
  - kind: Gateway
    name: external-gateway # External Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 8080
```
This approach highlights that it is possible to deploy the same application across multiple environments or networks without duplicating the configuration. This is useful for hybrid or multi-cloud Kubernetes architectures.
### Security at the heart of the project

The Gateway API was designed with security in mind: routes are configured separately and must be explicitly allowed to bind to a `Gateway`.

By default, an `HTTPRoute` can only bind to a `Gateway` in the same namespace. To allow cross-namespace bindings, you must explicitly grant permission in the `Gateway`:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: "allowed"
```
Within this `matchLabels`, it is possible to add various labels. These labels must be present on the namespaces where the different routes live so that they can be bound to the `Gateway`.

To relax this behaviour, it’s possible to authorise all namespaces by default:
```yaml
[...]
    allowedRoutes:
      namespaces:
        from: All
```
Another point: when an `HTTPRoute` wants to reach a service in another namespace, it must obtain authorisation via an object called `ReferenceGrant`:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: blog
  namespace: external-exposure
spec:
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: blog
      namespace: blog
      port: 8080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: blog-access
  namespace: blog
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: external-exposure
  to:
  - group: ""
    kind: Service
```
This can be convenient if you want to centralise the `HTTPRoute` objects rather than having them live in the application’s namespace.

The same logic applies to TLS certificates! If your `Gateway` wants to use a `Secret` stored in another namespace (for example, one managed by cert-manager), you also need a `ReferenceGrant`:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-tls-secret-access
  namespace: cert-manager
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: gateway-system
  to:
  - group: ""
    kind: Secret
    name: wildcard-tls
```
This approach prevents unauthorised access while maintaining the flexibility required for more complex organisations. It provides more granular control between objects without having to manage everything within the same namespace.
### Integrated Load Balancing
One of the features offered by the Gateway API is its native load balancing management, enabling traffic distribution across multiple backends. That’s right! It is possible to specify multiple backends for a defined rule.
The Gateway API can distribute traffic between multiple services with different weights, ideal for canary deployments or A/B testing:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-deployment
spec:
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-v1
      port: 8080
      weight: 90 # 90% of traffic to v1
    - name: api-v2
      port: 8080
      weight: 10 # 10% of traffic to v2
```
### And that’s not all!

As you’ll have understood, this API is rich, very rich. It would therefore be difficult to list all the functionality on offer.

Note, however, that filters are supported for redirections and header modification, not to mention the possibility of traffic mirroring…
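As a hedged illustration, a redirect filter and a mirror filter on an `HTTPRoute` might look like this (paths and backend names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: filters-demo
spec:
  rules:
  # Permanently redirect traffic on an old path to HTTPS
  - matches:
    - path:
        type: PathPrefix
        value: /old              # illustrative path
    filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
  # Serve traffic normally while mirroring a copy to a shadow backend
  - backendRefs:
    - name: blog
      port: 8080
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: blog-shadow      # illustrative mirror target
          port: 8080
```

Mirrored responses are discarded, which makes this handy for testing a new version against real traffic without user impact.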
## Gateway API strengths

To summarise what has been covered above, here are the arguments for migrating to the Gateway API:

### Portability first

No more annotations! The Gateway API defines a common standard: your `HTTPRoute` objects work just as well with Istio as with NGINX or Traefik.
### A multitude of possibilities
Routing by headers, canary deployment with traffic distribution, TCP/UDP/gRPC support… All of the features requested by the community are now covered or will be in future versions.
### Security by design

Clear role separation, `ReferenceGrant` for cross-namespace access, fine-grained permission control. Everything is designed so that access between Gateway API components is granted sparingly and explicitly.
### Massive adoption

Istio, NGINX, Traefik and Kong, as well as Google Cloud, AWS and Azure, offer their own `GatewayClass`. The Gateway API is becoming increasingly common, although community Helm charts have yet to adopt it widely, still favouring the traditional `Ingress`.
## What version 1.3 allows doing…

Version 1.3, released in April 2025, consolidates the Gateway API with an increasingly mature ecosystem. Some concepts are still in Experimental mode, while others progress to Standard.
In short, the Standard Channel includes stable, production-ready features, while the Experimental Channel includes new features that are still in development and may change. For example, some APIs may undergo breaking changes, which means these features should be used with caution.
This version 1.3 introduces some interesting features: percentage-based request mirroring, which duplicates a fraction of traffic to another backend; CORS filters for configuring cross-origin resource sharing; and XListenerSets to decouple listeners from the `Gateway` object. Also worth noting are retry budgets, which intelligently limit retry attempts to avoid overload.

To track the implementation status of each version, the project maintains a page tracking feature adoption. Istio and Cilium are the most advanced, with fairly complete support, while the others are steadily progressing towards this standard.
Finally, the list of improvement proposals can be found here in the Gateway Enhancement Proposal. This allows you to track proposals that have been rejected, are under consideration, or are still being implemented, if you feel like contributing!
## A few words to conclude

The Gateway API represents the natural evolution of service exposure in Kubernetes. Goodbye annotations, hello unified and extensible standard!
What’s more, the project is progressing rapidly thanks to investment from major players in the Kubernetes ecosystem and the Cloud Native Computing Foundation.
So take the time to study and deploy this concept, as each new release expands its capabilities, paving the way for wider adoption, even in production environments!