Connecting the Dots: SNAT, DNAT, and Traffic Flow in Kubernetes

Introduction

While working with Kubernetes, it’s easy to take for granted how smoothly everything runs. Kubernetes abstracts away a lot of the technical details, and things just seem to work like magic. But it’s always a good idea to understand what’s happening under the hood, especially when it comes to the basics. I have always been curious about how networking works when a pod needs to communicate with external services like AWS S3 or Google, so I decided to dig into how Kubernetes handles outgoing communication.

In this blog, I will take you through the process of outgoing communication from a pod, including how traffic reaches external services, and then touch upon how ingress works when external traffic reaches your application. To make things more concrete, I will show examples from my lab, including command-line outputs that demonstrate how breaking specific iptables rules affects the communication.

Environment details

I am using the same lab I deployed with k3d on my laptop; it runs kube-proxy in iptables mode.

➜ kubectl get nodes
NAME                            STATUS   ROLES                  AGE   VERSION
k3d-trinity-cluster1-agent-0    Ready    <none>                 25d   v1.30.4+k3s1
k3d-trinity-cluster1-agent-1    Ready    <none>                 25d   v1.30.4+k3s1
k3d-trinity-cluster1-server-0   Ready    control-plane,master   25d   v1.30.4+k3s1

➜ kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE                            NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          61s   10.42.2.28   k3d-trinity-cluster1-server-0   <none>           <none>

➜ kubectl get svc
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
test-svc   ClusterIP   10.43.51.114   <none>        80/TCP    23m
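Since NAT behavior depends on the kube-proxy mode, it is worth confirming which mode is active. From inside a node (for k3d, a docker exec into the node container), kube-proxy’s metrics endpoint reports it, assuming the default bind address of 127.0.0.1:10249:

curl http://127.0.0.1:10249/proxyMode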

Outgoing Communication from Pods

Imagine you have an application running in a Kubernetes pod that needs to interact with an external service, like Google. The question is, how does the packet travel from the pod to an external destination, especially when your pod has an internal IP that isn’t directly routable on the internet?

Here is a basic flow of how outgoing communication happens from a pod:

  • Pod sends a packet: Your pod, with an internal IP (let’s say 10.42.2.28), sends a request to Google.
  • Packet reaches the worker node: The pod’s packet first reaches the Kubernetes worker node where the pod is running.
  • SNAT: Since the pod IP is not routable outside the cluster (it’s part of an internal range), the packet needs its source IP changed to something that is routable. This is where SNAT (Source Network Address Translation) happens: the worker node replaces the pod’s IP with the node’s own external IP. You can watch this translation happen with conntrack, as shown after this list.
  • Packet leaves the node: The packet now has a routable source IP (the node’s external IP) and is sent to the router, then out to the internet, reaching Google.
  • Response comes back: Google sends a response to the node’s external IP. Kubernetes handles this and forwards the response back to the originating pod.
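To see the translation in action, you can list the connection-tracking table on the node while the pod talks to 8.8.8.8. This is a quick check that assumes conntrack-tools is installed on the node (it may not be present in a minimal k3d node image):

conntrack -L -d 8.8.8.8

Each entry shows the original tuple with src=10.42.2.28 and a reply tuple whose destination is the node’s IP rather than the pod’s, which is the masquerade doing its job.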

Outgoing Communication Testing in My Lab

To better understand SNAT, I tested this in my own Kubernetes lab environment. Below is an example of how SNAT works and what happens when the SNAT rule is removed.

Initial Test for Outgoing Communication:

When testing outgoing communication from my pod using a simple curl request to an external service (Google Public DNS, 8.8.8.8), everything worked as expected:

root@test-pod:/# curl -L https://8.8.8.8 
<!DOCTYPE html>
<html lang="en"> <head> <title>Google Public DNS</title> ...

This output shows that the pod can successfully reach the external service.

SNAT Rule in iptables:
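
The masquerade rule comes from flannel, the CNI that k3s ships with. On the node, you can list it with rule numbers (the number matters for the deletion step below):

iptables -t nat -L FLANNEL-POSTRTG -n --line-numbers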

5    MASQUERADE  0    --  10.42.0.0/16        !224.0.0.0/4          /* flanneld masq */ random-fully

Deleting the SNAT Rule (run on the node where the pod is scheduled):

iptables -t nat -D FLANNEL-POSTRTG 5

After removing the SNAT rule, the outgoing communication broke. Here is the result of the same curl command after deleting the rule:

root@test-pod:/# curl -L https://8.8.8.8 
curl: (28) Failed to connect to 8.8.8.8 port 443 after 129931 ms: Couldn't connect to server

This confirms that without SNAT, the pod’s packets leave with a source IP that is not routable outside the cluster, so replies never find their way back and the connection times out.
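To restore connectivity in the lab, the rule can be re-inserted at its original position. This is my reconstruction from the listing above (restarting flannel/k3s would also regenerate it):

iptables -t nat -I FLANNEL-POSTRTG 5 -s 10.42.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully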

Incoming Communication to Pods

So far, we have talked about how traffic goes out from a pod to external services. But what about when external users want to reach your application?

This is where Ingress comes into play (I use ingress in my lab). Ingress allows you to expose your services to the outside world through an Ingress controller, which acts as the gateway for external traffic.
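For reference, a minimal ingress along these lines can be created imperatively with kubectl; the resource name and hostname here are placeholders, not the exact ones from my lab:

kubectl create ingress test-ingress --rule="app.example.local/*=test-svc:80"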

Here is a basic flow of how incoming traffic reaches a pod:

  • External traffic arrives at the Ingress LoadBalancer: The user hits the LoadBalancer IP of your Ingress controller. In my cluster, I have exposed a service using an ingress resource.
  • Traffic is forwarded to the appropriate service: Based on the Ingress configuration, the request is routed to the Kubernetes service that backs the application.
  • Service forwards traffic to the pod: Once the traffic reaches the service, it forwards the request to the appropriate pod.
  • DNAT: Just like SNAT rewrites the source IP of outgoing traffic, DNAT (Destination Network Address Translation) rewrites the destination IP of incoming traffic, translating the service’s ClusterIP to the chosen pod’s IP. You can trace the chains involved, as shown below.
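In kube-proxy’s iptables mode, this translation is spread across chains: KUBE-SERVICES matches the ClusterIP, jumps to a KUBE-SVC-<hash> chain that load-balances across the endpoints, and each endpoint has a KUBE-SEP-<hash> chain holding the actual DNAT rule. On a node you can follow the path like this (the hash suffixes will differ in your cluster):

iptables -t nat -L KUBE-SERVICES -n | grep test-svc
iptables -t nat -L KUBE-SEP-POEHWB2FQLHEXHSI -n --line-numbers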

Incoming Communication Testing in My Lab

To test, I sent a request from a second pod to the service’s cluster DNS name:

root@test-pod2:/# curl test-svc.default.svc.cluster.local -I
HTTP/1.1 200 OK

This confirms that the pod could receive incoming requests from the service.

DNAT Rule in iptables (from the KUBE-SEP chain traced above):

2    DNAT       6    --  0.0.0.0/0            0.0.0.0/0            /* default/test-svc */ tcp to:10.42.2.28:80

Deleting the DNAT Rule (kube-proxy programs these rules on every node, so run this on the node where the client pod is scheduled):

iptables -t nat -D KUBE-SEP-POEHWB2FQLHEXHSI 2

After deleting this rule, the incoming communication to the pod broke:

root@test-pod2:/# curl test-svc.default.svc.cluster.local -I
curl: (28) Failed to connect to test-svc.default.svc.cluster.local port 80 after 130275 ms: Couldn't connect to server

This confirms that without DNAT, the service’s ClusterIP is never translated to the pod’s IP, so traffic sent to the service cannot reach the pod, whether it originates inside the cluster or arrives from outside via Ingress.
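To recover in the lab, the rule can be re-inserted at position 2. This is my reconstruction of what kube-proxy programs; note that kube-proxy periodically resyncs its rules, so the rule may also reappear on its own:

iptables -t nat -I KUBE-SEP-POEHWB2FQLHEXHSI 2 -p tcp -m comment --comment "default/test-svc" -m tcp -j DNAT --to-destination 10.42.2.28:80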

Conclusion

As I explored the networking side of Kubernetes, it became clear that the iptables rules generated by kube-proxy and the CNI plugin are crucial to pod communication. Whether it’s traffic going out of the cluster or external requests coming in via Ingress, Kubernetes uses SNAT and DNAT to manage IP translation.

This is a powerful reminder that although Kubernetes abstracts away many networking complexities, it’s still important to understand the underlying mechanics. By knowing how SNAT and DNAT work in Kubernetes, you can troubleshoot networking issues.
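When something does break, a quick way to see every NAT rule that touches a given pod or service IP is to dump the nat table on the node and grep for it:

iptables-save -t nat | grep 10.42.2.28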

I hope this explanation helps you get a better understanding of how your pods communicate with external services and how ingress traffic is handled.

If you have any query related to this topic, please add a comment or ping me on LinkedIn. Stay curious and keep exploring!
