
Linux Load Balancer Direct Routing: High-Performance Guide

TL;DR

Linux load balancer direct routing, often referred to as LVS-DR, is a high-speed load balancing method where the director node forwards incoming packets to real servers at the data link layer (Layer 2). In this configuration, the load balancer modifies the destination MAC address of the packet to match a backend server while keeping the destination IP (the Virtual IP) intact, allowing the backend server to process the request and reply directly to the client. This approach eliminates the load balancer as a bottleneck for return traffic, making it ideal for high-bandwidth applications.

Understanding the Mechanics of Direct Routing

In a standard NAT-based load balancing setup, the load balancer sits in the middle of both the request and the response. While functional, this creates a performance ceiling because the balancer must process every byte of outgoing data. Direct Routing changes this dynamic by using Direct Server Return (DSR). When a packet arrives at the load balancer (the Director), the Director looks at its connection table, selects a real server, and rewrites the destination MAC address of the frame. It then places the frame back onto the local network segment.

Because the IP header remains unchanged, the real server receives a packet addressed to the Virtual IP (VIP). For this to work, the real server must be configured to accept traffic for the VIP, typically by assigning it to a loopback interface. The real server processes the request and sends the response directly to the client’s IP. The client receives a packet from the VIP, unaware that the load balancer was only involved in the initial handoff. This asymmetry is why LVS-DR can handle thousands of concurrent connections with minimal CPU overhead on the Director.

The Critical Role of Layer 2 Adjacency

The defining constraint of Linux load balancer direct routing is that the Director and all real servers must be on the same broadcast domain. Since the Director forwards packets by changing the MAC address, it cannot push those packets across a router to a different subnet. Whether you are designing a red team lab or a production environment, use a network scanner to verify the reachability and subnet alignment of your intended backend nodes before deployment, and confirm that every node sits within the same VLAN or physical switch.

LVS-DR is essentially a "MAC-address-swapping" trick. If the Director cannot see the MAC address of the real server via ARP, the packet cannot be delivered.

Solving the ARP Flux Problem

The most common failure point in LVS-DR deployments is the ARP Flux problem. Since the VIP is configured on both the Director and the loopback interfaces of all real servers, multiple machines may try to claim the VIP when an ARP request is broadcast. If a real server responds to an ARP request for the VIP instead of the Director, the traffic will bypass the load balancer entirely, breaking the distribution logic and potentially causing intermittent connectivity.

Kernel Parameters for ARP Suppression

To fix this, we must instruct the Linux kernel on the real servers to remain silent when ARP requests for the VIP arrive. This is achieved by modifying sysctl parameters. Specifically, we focus on the arp_ignore and arp_announce settings. These settings ensure that the real server only responds to ARP requests for its own primary IP address, not the VIP hidden on its loopback interface.

By applying these settings to the all and lo interfaces in /etc/sysctl.conf, the real server becomes "invisible" to ARP requests for the VIP, allowing the Director to maintain control over incoming traffic. This is a vital step during OSCP prep or any hands-on configuration task involving LVS.
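As a concrete sketch, these are the values the LVS documentation recommends; append them to /etc/sysctl.conf on each real server and load them with sysctl -p:

# Reply to ARP only when the target IP is configured on the receiving interface
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1

# Always use the primary address of the outgoing interface as the ARP source,
# so the VIP held on the loopback is never announced
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2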

Step-by-Step Configuration of LVS-DR

Setting up Linux load balancer direct routing requires tools like ipvsadm for the Director and basic networking utilities for the real servers. For a production-ready environment, Keepalived is typically used to manage ipvsadm rules and provide high availability for the Director itself. Before beginning, it is wise to perform DNS enumeration to ensure your VIP records are correctly mapped. You can follow this DNS Enumeration with Dnsenum guide to verify your environment's naming scheme.

1. Configuring the Director Node

The Director needs the VIP assigned to its external interface. For example, if eth0 is the primary interface, you might add the VIP as eth0:0. Once the interface is up, use ipvsadm to define the service. A typical command looks like ipvsadm -A -t 192.168.1.100:80 -s rr, which creates a virtual service on the VIP at port 80 using the Round Robin scheduler. Next, add the real servers using the -g flag, which specifies the Gatewaying (Direct Routing) method: ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.101:80 -g.
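Putting that together, a minimal Director sketch might look like the following; the eth0 interface name and the second backend address 192.168.1.102 are assumptions for illustration:

# Bring the VIP up as an alias on the external interface
ip addr add 192.168.1.100/32 dev eth0 label eth0:0

# Create the virtual service with the Round Robin scheduler
ipvsadm -A -t 192.168.1.100:80 -s rr

# Attach the real servers in Direct Routing (gatewaying) mode;
# 192.168.1.102 is an assumed address for the second backend
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.101:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.102:80 -g

# Review the resulting virtual service table
ipvsadm -L -n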

2. Preparing the Real Servers

On each real server, the VIP must be added to the loopback interface (lo:0) with a netmask of 255.255.255.255. This prevents the server from trying to route traffic for the entire subnet through the loopback. After adding the IP, apply the sysctl tweaks mentioned earlier. Verify the configuration by checking if the server can still reach the gateway through its primary interface while holding the VIP on its loopback.
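A matching real-server sketch, assuming the same VIP, an eth0 uplink, and 192.168.1.1 as the gateway:

# Hold the VIP on the loopback with a host (/32) mask so no subnet route is created
ip addr add 192.168.1.100/32 dev lo label lo:0

# Silence ARP for the VIP (same values as in /etc/sysctl.conf above)
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2

# Sanity check: the default gateway (192.168.1.1 assumed here) must still be
# reachable through the physical interface
ping -c 3 -I eth0 192.168.1.1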

3. Verifying the Traffic Flow

From a client machine, attempt to access the service on the VIP. Use tcpdump on the real server to watch for incoming traffic. You should see packets arriving with the destination IP of the VIP but the destination MAC of the real server's own interface. If the traffic arrives but no response reaches the client, check the server's default gateway and firewall rules. For more advanced troubleshooting and security auditing, you can use an online port scanner to confirm the VIP exposes only the intended ports.
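To watch that traffic with tcpdump, a capture along these lines (interface name assumed) prints the Ethernet headers so the MAC rewrite is visible:

# -e prints link-layer headers; expect dst IP = VIP, dst MAC = this server's NIC
tcpdump -e -n -i eth0 host 192.168.1.100 and tcp port 80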

Component     | IP Configuration              | Role in LVS-DR
Director      | VIP (Public) & DIP (Local)    | Manages the connection table; rewrites MAC addresses.
Real Server 1 | RIP (Local) & VIP (Loopback)  | Processes requests; responds directly to the client.
Real Server 2 | RIP (Local) & VIP (Loopback)  | Processes requests; responds directly to the client.
Client        | CIP (Public/Local)            | Sends requests to the VIP; receives responses from the VIP.

Pentesting and Auditing LVS-DR Environments

From a security perspective, Linux load balancer direct routing introduces unique vectors and challenges. Because LVS-DR relies on Layer 2 manipulation, it is susceptible to internal attacks if the local network is compromised. Pentesters should look for these configurations when performing post-exploitation or internal infrastructure assessments. If you find a load balancer, use a tool like Nessus to scan for vulnerabilities in the underlying services. This Kali Linux Nessus Setup guide provides a solid foundation for this phase.

Identifying LVS-DR via Fingerprinting

Detecting LVS-DR from the outside is difficult because the responses look like they come from a single host. However, subtle timing differences or TCP timestamp variations between backend servers can leak the existence of a load balancer. If you are on the same local network, you can identify LVS-DR by comparing the MAC address in the Ethernet header of the response to the MAC address of the Director. If they differ, and the response MAC belongs to a different host than the one you sent the request to, you are likely looking at a DSR/Direct Routing setup.
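From an on-link position, a rough check could pair arping with a MAC-aware capture; eth0 is again an assumed interface name:

# Which MAC answers ARP for the VIP? In a healthy cluster, the Director's.
arping -c 3 -I eth0 192.168.1.100

# Watch replies sourced from the VIP: a source MAC that differs from the
# ARP owner above points to a DSR/Direct Routing setup
tcpdump -e -n -i eth0 src host 192.168.1.100 and tcp port 80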

ARP Spoofing and VIP Hijacking

Since LVS-DR relies on the Director being the "owner" of the VIP's MAC address in the ARP tables of the gateway and clients, ARP spoofing is a devastating attack. By poisoning the ARP cache of the real servers or the Director, an attacker can intercept traffic or cause a Denial of Service (DoS). Furthermore, if a real server is compromised, an attacker could disable the arp_ignore settings, causing ARP flux and disrupting the entire cluster's traffic flow.

To protect against these threats, security teams should implement Snort IDS to monitor for suspicious ARP activity. Refer to the Snort IDS Installation on Kali Linux guide to set up monitoring that can detect MAC address changes and ARP storms in real-time.

Key Takeaway: Security in LVS-DR is heavily dependent on the integrity of the Layer 2 domain. If an attacker gains "on-link" access, the entire load balancing logic can be bypassed or subverted.

Performance Comparison: DR vs. NAT vs. TUN

Choosing the right load balancing method depends on your throughput requirements and network topology. LVS-DR is almost always the winner for pure speed, but it lacks the flexibility of LVS-NAT or LVS-TUN. In a NAT environment, the backend servers can be in any private subnet, and the load balancer handles the translation. This is easier to configure but limits the total bandwidth to the capacity of the Director's network interface.

LVS-TUN (IP Tunneling) allows real servers to be located on different subnets or even different geographical locations. It wraps the original packet in a new IP header. Like DR, it supports Direct Server Return, but it incurs a small overhead due to encapsulation and requires the backend servers to support IPIP tunneling. For most high-performance data centers where servers share a rack or a VLAN, Linux load balancer direct routing remains the gold standard.

Hardening Your LVS-DR Infrastructure

Once the Linux load balancer direct routing setup is functional, hardening is the next priority. Start by minimizing the services running on the Director node: it should act as a pure traffic cop, not a web server or database. Use iptables or nftables to restrict access to the VIP to only the necessary ports. Leave the ip_forward sysctl parameter disabled unless something else on the host requires it; LVS-DR hands packets off at Layer 2 and does not rely on standard IP forwarding.
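As a sketch of that filtering on the Director, reusing the VIP and port from the earlier example:

# Permit only the load-balanced service on the VIP and drop everything else
# addressed to it
iptables -A INPUT -d 192.168.1.100 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -d 192.168.1.100 -j DROP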

On the backend servers, ensure the loopback VIP is not reachable from the outside world except through the Director. You can use firewalld or iptables to drop any packets arriving at the physical interface that are addressed to the VIP but do not originate from the Director’s MAC address. This prevents attackers from bypassing the load balancer by sending crafted packets directly to the backend servers.
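One way to express this on a real server is iptables' mac match module; the Director MAC 00:11:22:33:44:55 below is a hypothetical placeholder:

# Accept VIP-addressed frames only when they were forwarded by the Director
# (00:11:22:33:44:55 is a hypothetical Director MAC; substitute your own)
iptables -A INPUT -i eth0 -d 192.168.1.100 -m mac ! --mac-source 00:11:22:33:44:55 -j DROP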

For persistent monitoring, consider integrating your logs with a HIDS solution. Monitoring the /var/log/messages or journalctl for Keepalived state changes can provide early warning of node failures or split-brain scenarios. Following a structured hardening guide, similar to the OSSEC Host Intrusion Detection Guide, will help maintain the integrity of your load-balanced cluster.

Frequently Asked Questions

Why is my LVS-DR setup not responding to requests?
The most common cause is the ARP Flux problem. If the real servers are responding to ARP requests for the VIP, the gateway may be sending traffic to a real server that isn't prepared to handle it, or the Director is being bypassed. Check your arp_ignore and arp_announce settings. Also, ensure the VIP is correctly added to the loopback interface with a /32 mask.
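A quick sanity check on a real server, assuming the settings from this guide:

# Confirm the ARP suppression values and the /32 VIP on the loopback
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
ip addr show dev lo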

Can I use LVS-DR with servers in different subnets?
No. Linux load balancer direct routing relies on modifying the MAC address of the frame. MAC addresses are only relevant within a single Layer 2 broadcast domain. If your servers are in different subnets, you must use LVS-TUN or LVS-NAT instead.

Does LVS-DR support SSL termination?
LVS-DR itself does not perform SSL termination because it does not inspect the application layer (Layer 7). It simply passes the packets through. If you need SSL termination, you must either terminate the SSL on each real server individually or use a Layer 7 load balancer like HAProxy or Nginx in front of your LVS setup.

Is Direct Routing faster than HAProxy?
In terms of raw packet throughput and latency, yes. LVS-DR operates at the kernel level (Layer 4) and uses Direct Server Return, meaning it does far less work than a user-space proxy like HAProxy, which must manage two separate TCP connections for every request. However, HAProxy offers more advanced features like cookie-based persistence and header manipulation.