Authors: Andrew Sy Kim (Google)
Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of
two features (Service internal traffic policy support and EndpointSlice terminating conditions) to GA,
and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims
to address shortcomings in traffic engineering that people face today, and unlock new capabilities for the future.
Traffic Loss from Load Balancers During Rolling Updates
Prior to Kubernetes v1.26, clusters could experience loss of traffic
from Service load balancers during rolling updates when setting the
externalTrafficPolicy field to Local.
There are a lot of moving parts at play here, so a quick overview of how Kubernetes manages load balancers might help!
In Kubernetes, you can create a Service with
type: LoadBalancer to expose an application externally with a load balancer.
The load balancer implementation varies between clusters and platforms, but the Service provides a generic abstraction
representing the load balancer that is consistent across all Kubernetes installations.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
Under the hood, Kubernetes allocates a NodePort for the Service, which is then used by kube-proxy to provide a
network data path from the NodePort to the Pod. A controller will then add all available Nodes in the cluster
to the load balancer’s backend pool, using the designated NodePort for the Service as the backend target port.
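To make that concrete, here is an illustrative fragment of the Service above after creation; the nodePort value shown is hypothetical, since Kubernetes picks it automatically from the configured node port range:

spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      # Allocated automatically; kube-proxy programs forwarding rules for this
      # port on every Node, and the load balancer controller registers it as
      # the backend target port.
      nodePort: 30007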
Oftentimes it is beneficial to set
externalTrafficPolicy: Local for Services, to avoid extra hops between
Nodes that are not running healthy Pods backing that Service. When using externalTrafficPolicy: Local,
an additional NodePort is allocated for health checking purposes, such that Nodes that do not contain healthy
Pods are excluded from the backend pool for a load balancer.
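As a sketch, the earlier Service could be switched to the Local policy like this; the healthCheckNodePort value shown is hypothetical, since Kubernetes normally allocates it for you:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
  externalTrafficPolicy: Local
  # Allocated automatically; the load balancer probes this port to decide
  # whether a Node has healthy local endpoints for the Service.
  healthCheckNodePort: 32100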
One such scenario where traffic can be lost is when a Node loses all Pods for a Service,
but the external load balancer has not probed the health check NodePort yet. The likelihood of this situation
is largely dependent on the health checking interval configured on the load balancer. The larger the interval,
the more likely this will happen, since the load balancer will continue to send traffic to a node
even after kube-proxy has removed forwarding rules for that Service. This also occurs when Pods start terminating
during rolling updates. Since Kubernetes does not consider terminating Pods as “Ready”, traffic can be lost
when there are only terminating Pods on any given Node during a rolling update.
Starting in Kubernetes v1.26, kube-proxy enables the
ProxyTerminatingEndpoints feature by default, which
adds automatic failover and routing to terminating endpoints in scenarios where the traffic would otherwise
be dropped. More specifically, when there is a rolling update and a Node only contains terminating Pods,
kube-proxy will route traffic to the terminating Pods based on their readiness. In addition, kube-proxy will
actively fail the health check NodePort if there are only terminating Pods available. By doing so,
kube-proxy alerts the external load balancer that new connections should not be sent to that Node but will
gracefully handle requests for existing connections.
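For reference, the feature gate can still be toggled explicitly through the kube-proxy configuration; this fragment is only a sketch, since in v1.26 the gate is already enabled by default:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  # Enabled by default in v1.26; set to false only to opt out of the behavior.
  ProxyTerminatingEndpoints: true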
In order to support this new capability in kube-proxy, the EndpointSlice API introduced new conditions for endpoints:
The serving condition is semantically identical to ready, except that it can be true (or false)
while a Pod is terminating, unlike ready, which is always
false for terminating Pods for compatibility reasons.
The terminating condition is true for Pods undergoing termination (non-empty deletionTimestamp) and false otherwise.
The addition of these two conditions enables consumers of this API to understand Pod states that were previously not possible.
For example, we can now track “ready” and “not ready” Pods that are also terminating.
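As an illustration (the names and addresses here are made up), an endpoint for a terminating Pod that is still passing its readiness probe would look like this in an EndpointSlice:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 9376
endpoints:
  - addresses:
      - "10.1.2.3"
    nodeName: node-1
    conditions:
      # Always false for terminating Pods, for compatibility reasons.
      ready: false
      # Still true because the Pod continues to pass its readiness probe.
      serving: true
      # True because the Pod has a non-empty deletionTimestamp.
      terminating: true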
Consumers of the EndpointSlice API, such as kube-proxy and ingress controllers, can now use these conditions to coordinate connection draining
events, by continuing to forward traffic for existing connections but rerouting new connections to other non-terminating endpoints.
Optimizing Internal Node-Local Traffic
Similar to how Services can set
externalTrafficPolicy: Local to avoid extra hops for externally sourced traffic, Kubernetes now supports
internalTrafficPolicy: Local, to enable the same optimization for traffic originating within the cluster, specifically
for traffic using the Service Cluster IP as the destination address. This feature graduated to Beta in Kubernetes v1.24 and is graduating to GA in v1.26.
Services default the
internalTrafficPolicy field to
Cluster, where traffic is randomly distributed to all endpoints.
When internalTrafficPolicy is set to
Local, kube-proxy will forward internal traffic for a Service only if there is an available endpoint
that is local to the same Node.
When using internalTrafficPolicy: Local, traffic will be dropped by kube-proxy when no local endpoints are available.
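A minimal sketch of a Service opting into this behavior (the Service name is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  # Cluster-internal traffic to this Service's ClusterIP is only forwarded to
  # endpoints on the same Node as the client; if there are none, it is dropped.
  internalTrafficPolicy: Local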
If you’re interested in future discussions on Kubernetes traffic engineering, you can get involved in SIG Network.