NLB targets can't connect to themselves through the NLB
When using network load balancers (NLBs) that forward traffic to targets/members without changing the source IP address, targets/members behind the NLB are often unable to connect to themselves through listeners/front-ends on the same NLB.
This is usually due to filtering on the target/member. When the packets arrive at the target/member, the source IP and destination IP are the same. This looks like suspicious or spoofed traffic and is consequently dropped by the target/member's OS.
Use-cases
While this scenario may seem odd for load balancing front-end user traffic to a group of web servers, it's not uncommon for a service or component running on a server to need to talk to a load-balanced endpoint for another service which happens to be running on the same server.
In a highly available Kubernetes cluster, for example, the Kubernetes API should be made available through a load-balanced endpoint, and all nodes, including the control plane nodes hosting the API server, then connect to that endpoint for scalability and fault tolerance.
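As a rough sketch of what that looks like with kubeadm (the hostname is a placeholder for the NLB's DNS name; 6443 is the usual API server port):

# Sketch, assuming kubeadm: every node is pointed at the load-balanced
# endpoint rather than at a specific control plane node.
sudo kubeadm init --control-plane-endpoint "k8s-api.example.internal:6443" --upload-certs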
Linux causes
Martian filtering
In Linux, packets where the source and destination IP addresses are the same are flagged as "martians" and are consequently dropped.
You can verify if this is the cause by enabling and viewing logs for martian packets:
# Check the current setting
sudo sysctl net | grep log_martians
# Enable logging of martian packets
sudo sysctl -w net.ipv4.conf.all.log_martians=1
# Confirm the new setting
sudo sysctl net | grep log_martians
# Watch the kernel log for martian entries
sudo dmesg -w | grep martian
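To trigger log entries, you can attempt a hairpin connection from one of the targets while the dmesg command above is running. A minimal sketch, assuming curl is available and using a placeholder hostname for the NLB; note the failure can be intermittent, since the NLB only sometimes routes the connection back to the same target:

# From a target/member behind the NLB; the hostname is a placeholder for the
# NLB's DNS name, and the port/path depend on your listener configuration.
curl -v --connect-timeout 5 http://my-nlb.example.internal/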
Martian filtering cannot be tuned or disabled.
RPF filtering
While this is already a moot point since Martian filtering cannot be disabled (except perhaps with a custom kernel), another possible cause is reverse path forwarding (RPF) filtering.
In strict reverse path filtering mode (rp_filter=1), when a packet arrives on an interface, the Linux kernel checks that it was received on the interface with the best path to reach the source IP.
In loose reverse path filtering mode (rp_filter=2), when a packet arrives on an interface, the Linux kernel only checks that it has a valid path to reach the source IP.
rp_filter=0 means no check is performed. You can check if RPF filtering is enabled with
sudo sysctl -a | grep '\.rp_filter'
It's possible that packets where src_ip == dst_ip == iface_ip would pass both strict and loose mode RPF checks, since the best path to the source is via the interface the packet was received on. However, it's unclear if this special case would be allowed for non-loopback interfaces. I haven't verified this potential cause since the traffic is already filtered by Martian filtering, but I thought it worth mentioning to highlight that there could be safeguards in multiple components.
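If you do want to experiment with relaxing RPF anyway, here's a sketch (eth0 is a placeholder interface name; per the kernel docs quoted below, the maximum of the conf/all and per-interface values is used, so both need to be lowered):

# Experiment only: switch RPF to loose mode. The kernel uses the maximum of
# net.ipv4.conf.all.rp_filter and the per-interface value, so both are set.
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.eth0.rp_filter=2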
Solutions
NLB performs SNAT
The simplest solution is to have the NLB perform SNAT.
The main downside is that traffic logs that use the source IP will no longer show client IP addresses, since all incoming traffic will have the load balancer's IP as its source. Another downside is that return traffic from the target/member servers is sent back to the load balancer's IP, which means slightly more overhead and potential port reuse/exhaustion when connection volumes are very high.
In AWS, this means disabling "Preserve client IP addresses" on the target group.
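For example, with the AWS CLI (the target group ARN below is a placeholder):

# Disable client IP preservation on the NLB target group (ARN is a placeholder)
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef \
  --attributes Key=preserve_client_ip.enabled,Value=false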
Target/member performs SNAT
Theoretically, it may also be possible to configure the target/member server to SNAT the traffic so it appears to be coming from a different IP address. I wouldn't recommend this, though, since it's more of a hack and would be difficult to debug unless well documented and well understood.
Just don't
Another option is to avoid going through the load balancer altogether. You could configure the application to connect to the service on localhost, or add an /etc/hosts entry so that the load balancer's DNS name resolves to a local IP address. However, you lose the fault tolerance of a load-balanced endpoint, and it could again be difficult to debug unless it's well documented and well understood.
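A minimal sketch of the /etc/hosts approach, using a placeholder hostname; this assumes the local service is actually listening on the loopback address, otherwise the node's own IP can be used instead:

# Make the load balancer's DNS name resolve locally so connections never
# leave the node (the hostname is a placeholder for the NLB's DNS name).
echo "127.0.0.1 my-nlb.example.internal" | sudo tee -a /etc/hosts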
References
- "NAT loopback, also known as hairpinning, is not supported when client IP preservation is enabled. This occurs when using internal Network Load Balancers, and the target registered behind a Network Load Balancer creates connections to the same Network Load Balancer. The connection can be routed to the target which is attempting to create the connection, leading to connection errors. We recommend not connecting to a Network Load Balancer from targets behind same Network Load Balancer, alternatively you can also prevent this type of connection error by disabling client IP preservation."
- https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html
rp_filter - INTEGER
- 0 - No source validation.
- 1 - Strict mode as defined in RFC3704 Strict Reverse Path. Each incoming packet is tested against the FIB and if the interface is not the best reverse path the packet check will fail. By default failed packets are discarded.
- 2 - Loose mode as defined in RFC3704 Loose Reverse Path. Each incoming packet’s source address is also tested against the FIB and if the source address is not reachable via any interface the packet check will fail.