Layer 4 Load Balancing
Configure TCP/UDP load balancing with nftlb
Layer 4 load balancing operates at the transport layer of the OSI model, making forwarding decisions based on IP addresses and TCP or UDP port numbers. Because it does not inspect application-layer content, L4 balancing is protocol-agnostic and introduces minimal latency, making it suitable for high-throughput and low-latency workloads.
How It Works
Tula's Layer 4 load balancing is powered by nftlb, a high-performance load balancer built on the Linux nftables framework. nftlb operates entirely within the kernel networking stack, which eliminates the overhead of proxying connections through a userspace process. Incoming packets matching a VIP are rewritten and forwarded directly to a selected backend server according to the configured algorithm.
Unlike Layer 7 load balancing, L4 does not terminate connections. The load balancer acts as a transparent forwarding engine. This means the original client connection is preserved end-to-end, and the load balancer does not need to maintain per-connection state for protocol parsing.
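Conceptually, the kernel rules that nftlb installs resemble the hand-written nftables sketch below. This is an illustrative approximation, not rules copied from nftlb output; the table name, VIP, and two-backend map are made up for the example.

```
table ip lb-example {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        # New connections to the VIP are DNATed to one of two backends,
        # alternating via a per-rule counter (round-robin behavior).
        ip daddr 192.0.2.10 tcp dport 80 dnat to numgen inc mod 2 map { 0 : 198.51.100.11, 1 : 198.51.100.12 }
    }
}
```

Because the rewrite happens in the nftables NAT hook, subsequent packets of an established connection follow the conntrack entry created by the first packet, with no userspace involvement.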
Supported Protocols
Layer 4 VIPs in Tula support the following protocols:
| Protocol | Use Cases |
|---|---|
| TCP | HTTP/HTTPS (passthrough), databases, mail servers, SSH, custom TCP services |
| UDP | DNS, RADIUS, syslog, VoIP (SIP/RTP), game servers |
Because L4 operates below the application layer, it can balance any protocol that runs over TCP or UDP without requiring protocol-specific configuration.
Configuring an L4 VIP
To create a Layer 4 VIP:
- Navigate to Load Balancing > Virtual IPs and click Add VIP.
- Set the Protocol to TCP or UDP.
- Provide the VIP IP address and port. For services spanning multiple ports, configure a separate VIP for each port or use a port range where supported.
- Select a load balancing algorithm. Round Robin and Least Connections are common choices for L4 workloads.
- Add one or more backend servers with their IP addresses, ports, and optional weights.
- Configure health checks appropriate for the service. A TCP connect check is the most common health check type for L4 services.
- Save and Apply the configuration.
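Under the hood, the steps above map roughly onto an nftlb JSON farm definition. The sketch below is illustrative: the farm and backend names, addresses, and scheduler value are examples, and the exact schema Tula emits may differ.

```json
{
  "farms": [
    {
      "name": "tcp-web",
      "family": "ipv4",
      "virtual-addr": "192.0.2.10",
      "virtual-ports": "80",
      "protocol": "tcp",
      "mode": "snat",
      "scheduler": "rr",
      "state": "up",
      "backends": [
        { "name": "bck0", "ip-addr": "198.51.100.11", "port": "80", "weight": "1", "state": "up" },
        { "name": "bck1", "ip-addr": "198.51.100.12", "port": "80", "weight": "2", "state": "up" }
      ]
    }
  ]
}
```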
Backend Server Pools
Backend servers in an L4 VIP form a server pool. Each backend is defined by its IP address, port, and weight. Tula monitors backend health and automatically removes failed servers from the active pool, redistributing traffic among the remaining healthy backends. When a failed backend recovers and passes its health checks, it is automatically reintroduced to the pool.
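The pool behavior described above can be sketched in a few lines of Python. This is a minimal model, not Tula's implementation: the backend names, weights, and health flags are illustrative, and real health state comes from the configured health checks.

```python
import itertools

# Illustrative pool: two healthy backends and one that failed its check.
backends = [
    {"name": "bck0", "weight": 2, "healthy": True},
    {"name": "bck1", "weight": 1, "healthy": True},
    {"name": "bck2", "weight": 1, "healthy": False},  # removed from rotation
]

def active_pool(pool):
    """Expand healthy backends by weight; failed servers drop out entirely."""
    return [b["name"] for b in pool if b["healthy"] for _ in range(b["weight"])]

# Each new connection takes the next slot in the expanded pool
# (weighted round robin over healthy members only).
rotation = itertools.cycle(active_pool(backends))
assignments = [next(rotation) for _ in range(9)]
print(assignments)
# bck2 never appears; bck0 receives twice the share of bck1.
```

When bck2 later passes its health checks again, rebuilding the active pool reintroduces it, which is the automatic recovery behavior described above.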
Connection-Based vs. Packet-Based Distribution
nftlb supports two forwarding modes for L4 traffic:
- Connection-based (stateful): The load balancer tracks connections using nftables conntrack. Once a connection is assigned to a backend, all packets belonging to that connection are forwarded to the same backend. This is the default mode and is appropriate for TCP traffic and most UDP services.
- Packet-based (stateless): Each individual packet is independently assigned to a backend based on the algorithm. This mode is useful for stateless UDP protocols such as DNS, where each packet is an independent request.
The forwarding mode is selected automatically based on the protocol and configuration. TCP VIPs always use connection-based tracking. UDP VIPs default to connection-based tracking but can be configured for stateless distribution when appropriate.
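The difference between the two modes can be illustrated with a small Python model. The hash-based scheduler and the function names here are stand-ins for the real algorithms, chosen only to show the contrast in flow stickiness.

```python
import hashlib

BACKENDS = ["198.51.100.11", "198.51.100.12"]  # illustrative pool

def pick(key: str) -> str:
    """Map a key onto the pool (stands in for the scheduling algorithm)."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

# Connection-based (stateful): the first packet of a flow records a
# decision, and every later packet of that flow follows it.
conntrack = {}
def forward_stateful(flow: tuple, packet_id: int) -> str:
    if flow not in conntrack:
        conntrack[flow] = pick(str(flow))
    return conntrack[flow]

# Packet-based (stateless): every packet is scheduled independently,
# so packets of the same "flow" may land on different backends.
def forward_stateless(flow: tuple, packet_id: int) -> str:
    return pick(f"{flow}/{packet_id}")

flow = ("203.0.113.5", 53000, "192.0.2.10", 53)
stateful = {forward_stateful(flow, i) for i in range(4)}
stateless = {forward_stateless(flow, i) for i in range(4)}
print(len(stateful))  # always 1: the flow sticks to one backend
```

For a stateless protocol like DNS this spreading is harmless, since each packet is a complete request; for TCP it would break connections, which is why TCP VIPs always use connection tracking.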
Direct Server Return (DSR)
Layer 4 VIPs support Direct Server Return, where response traffic from backend servers bypasses the load balancer and flows directly to the client. DSR significantly reduces load on the balancer for asymmetric workloads where responses are much larger than requests. See the DSR documentation for configuration details.
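For DSR to work, each backend must accept packets addressed to the VIP without advertising that address on the network. On a typical Linux backend the preparation looks like the sketch below; the VIP address is illustrative, and the Tula-specific steps are in the DSR documentation.

```shell
# Add the VIP to the loopback interface so the backend accepts
# packets whose destination address is the VIP.
ip addr add 192.0.2.10/32 dev lo

# Prevent the backend from answering ARP for the VIP, so only the
# load balancer attracts client traffic for that address.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

With this in place, the backend can reply to clients directly from the VIP, and return traffic never traverses the load balancer.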
Next Steps
For workloads that require content-aware routing, SSL termination, or HTTP header manipulation, see Layer 7 Load Balancing. To understand the available scheduling algorithms, see Load Balancing Algorithms.