# Migrating from F5 BIG-IP to Tula
Migrating from F5 BIG-IP to Tula is a straightforward process once you understand the conceptual mapping between the two platforms. Both solve the same problems -- load balancing, health checking, SSL termination, and high availability -- but use different terminology and configuration approaches. This guide provides a structured migration path from an existing F5 deployment to Tula.
## Concept Mapping
Understanding how F5 concepts translate to Tula is the foundation of a successful migration.
| F5 BIG-IP | Tula |
| --- | --- |
| Virtual Server | VIP (Virtual IP) |
| Pool | Backend Group |
| Pool Member / Node | Real Server (Backend) |
| Monitor | Health Check |
| iRule | HAProxy ACL / Routing Rule |
| Profile (TCP/HTTP/SSL) | VIP Settings |
| SNAT / SNAT Pool | NAT Mode |
| Persistence Profile | Session Persistence |
| Traffic Group | VRRP Group / Floating IP |
| Device Trust / Device Group | Cluster |
| SSL Client Profile | SSL Certificate (VIP) |
| SSL Server Profile | SSL Backend |
## Step 1: Export and Audit Your F5 Configuration
- Export the F5 configuration:
  ```
  tmsh save sys config
  tmsh list > /var/tmp/f5-config-export.txt
  ```
- Document all virtual servers, pools, monitors, iRules, and SSL profiles.
- Identify which virtual servers actively receive traffic -- prioritise these and consider retiring unused configurations.
- Flag any iRules with complex TCL logic that may need adaptation for HAProxy ACLs.
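The exported tmsh listing can be audited with standard shell tools. The sketch below counts object types and lists iRules for manual review; it works against the export file from the step above, but uses a small inline sample (with hypothetical object names) so it runs standalone.

```shell
#!/bin/sh
# Audit an F5 tmsh export: count objects and flag iRules for manual review.
# EXPORT would normally be /var/tmp/f5-config-export.txt from the step above;
# an inline sample with hypothetical object names is used here for illustration.
EXPORT=$(mktemp)
cat > "$EXPORT" <<'EOF'
ltm virtual /Common/vs_web { destination /Common/10.0.0.10:443 }
ltm virtual /Common/vs_api { destination /Common/10.0.0.11:8443 }
ltm pool /Common/pool_web { members { /Common/10.1.0.1:8080 } }
ltm monitor http /Common/mon_web { interval 5 timeout 16 }
ltm rule /Common/redirect_rule { when HTTP_REQUEST { ... } }
EOF

count() { grep -c "^ltm $1 " "$EXPORT"; }

vs_count=$(count virtual)
pool_count=$(count pool)
mon_count=$(count monitor)
irules=$(grep '^ltm rule ' "$EXPORT" | awk '{print $3}')

echo "virtual servers: $vs_count"
echo "pools: $pool_count"
echo "monitors: $mon_count"
echo "iRules to review: $irules"
rm -f "$EXPORT"
```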
## Step 2: Map Virtual Servers to VIPs
For each F5 virtual server, create a corresponding Tula VIP.
- Navigate to Load Balancing > Virtual IPs and click Add VIP.
- Map the properties:
  - F5 Destination maps to Tula IP Address/Port.
  - F5 IP Protocol maps to Tula Protocol.
  - F5 Source Address Translation maps to Tula NAT Mode.
  - F5 Persistence Profile maps to Tula Session Persistence.
- Create an L4 VIP (nftlb) for TCP/UDP virtual servers without HTTP profiles, or an L7 VIP (HAProxy) for those with HTTP profiles.
- Click Save.
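To drive VIP creation, the destination address, port, and protocol of each virtual server can be extracted from the tmsh export into a worksheet. A sketch, assuming the multi-line stanza layout shown in the inline sample (the object names are hypothetical; adjust the awk patterns to match your actual export):

```shell
#!/bin/sh
# Build a migration worksheet (name,address,port,protocol), one row per
# virtual server, from a tmsh export. The stanza layout below is an
# assumption; adapt the patterns to your export file.
EXPORT=$(mktemp)
cat > "$EXPORT" <<'EOF'
ltm virtual /Common/vs_web {
    destination /Common/10.0.0.10:443
    ip-protocol tcp
}
ltm virtual /Common/vs_dns {
    destination /Common/10.0.0.12:53
    ip-protocol udp
}
EOF

worksheet=$(awk '
    /^ltm virtual /  { name = $3 }
    /destination/    { split($2, d, ":"); sub(".*/", "", d[1]); addr = d[1]; port = d[2] }
    /ip-protocol/    { printf "%s,%s,%s,%s\n", name, addr, port, $2 }
' "$EXPORT")

echo "$worksheet"
rm -f "$EXPORT"
```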
## Step 3: Recreate Pools and Monitors
For each F5 pool, create the equivalent backend group and health checks in Tula.
- Within each VIP, navigate to Backends and add the pool members:
  - F5 Node Address maps to Tula Backend IP Address.
  - F5 Service Port maps to Tula Backend Port.
  - F5 Ratio maps to Tula Weight.
  - F5 Priority Group can be implemented using Tula backend states (Active / Standby).
- Configure health checks to match the F5 monitors:
  - F5 `tcp` monitor maps to Tula TCP health check.
  - F5 `http` / `https` monitor maps to Tula HTTP / HTTPS health check. Transfer the send string and expected receive string.
  - F5 `gateway_icmp` monitor maps to a Tula TCP check on the service port (Tula does not use ICMP-only monitors, as they are unreliable indicators of service health).
- Match the Interval, Timeout, and Failure Threshold values.
- Click Save.
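F5 monitors express failure tolerance as a timeout (the F5 default convention is timeout = 3 × interval + 1), while failure-threshold-style health checks count consecutive missed probes. A rough conversion, sketched as a shell function (the integer-division rounding is an assumption; verify the resulting behaviour against your tolerance requirements):

```shell
#!/bin/sh
# Convert an F5 monitor interval/timeout pair into an approximate
# consecutive-failure threshold: F5 marks a member down when no probe
# succeeds within `timeout` seconds, i.e. roughly timeout / interval
# missed probes (integer division is an assumption).
f5_to_failure_threshold() {
    interval=$1
    timeout=$2
    echo $(( timeout / interval ))
}

# F5 default http monitor: interval 5, timeout 16
f5_to_failure_threshold 5 16
```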
## Step 4: Migrate SSL Certificates
- Export certificates and private keys from the F5:
  ```
  tmsh list sys crypto cert <cert-name>
  tmsh list sys crypto key <key-name>
  ```
- In Tula, navigate to System > SSL Certificates and click Import Certificate.
- Upload the certificate file, private key, and any intermediate certificates.
- Assign the certificate to the appropriate VIP under its SSL configuration.
- For certificates managed by Let's Encrypt, consider switching to Tula's built-in Let's Encrypt automation instead of importing the existing certificate.
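Before importing, it is worth confirming that each exported certificate and private key actually belong together, by comparing their public-key digests with openssl. The snippet generates a throwaway self-signed pair purely so it runs standalone; in practice, run the two digest commands against the files exported from the F5.

```shell
#!/bin/sh
# Verify a certificate/key pair by comparing public-key digests.
# A throwaway self-signed pair is generated here for demonstration;
# substitute the certificate and key exported from the F5.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout demo.key -out demo.crt -days 1 \
    -subj "/CN=example.com" 2>/dev/null

cert_digest=$(openssl x509 -in demo.crt -noout -pubkey | openssl sha256)
key_digest=$(openssl pkey -in demo.key -pubout | openssl sha256)

if [ "$cert_digest" = "$key_digest" ]; then
    echo "certificate and key match"
else
    echo "MISMATCH: do not import this pair"
fi
rm -f demo.key demo.crt
```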
## Step 5: Test in Parallel
Running both systems in parallel reduces risk.
- Configure Tula VIPs with the same backends as F5, using temporary IP addresses.
- Test each service against the Tula VIP:
  ```
  curl -H "Host: example.com" http://<tula-vip-address>/
  ```
- Validate response correctness, persistence behaviour, and health check accuracy.
- Gradually shift traffic using DNS weighted routing. Monitor error rates and latency.
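Response validation can be partially automated by capturing headers from both sides and diffing them. A sketch, with hypothetical sample captures inlined so it runs standalone; in practice the two files would come from `curl -sI` against the F5 and Tula VIPs with the same Host header:

```shell
#!/bin/sh
# Compare response status and headers captured from the F5 and Tula sides
# of a parallel test. In practice:
#   curl -sI -H "Host: example.com" http://<f5-vip>/   > f5.headers
#   curl -sI -H "Host: example.com" http://<tula-vip>/ > tula.headers
# Hypothetical sample captures are inlined here for illustration.
cat > f5.headers <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/html
Set-Cookie: BIGipServer=...; path=/
EOF
cat > tula.headers <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/html
EOF

status_of() { head -n1 "$1" | awk '{print $2}'; }

f5_status=$(status_of f5.headers)
tula_status=$(status_of tula.headers)

if [ "$f5_status" = "$tula_status" ]; then
    echo "status match: $f5_status"
else
    echo "STATUS MISMATCH: F5=$f5_status Tula=$tula_status"
fi
# Header differences (persistence cookies are expected to differ):
diff f5.headers tula.headers || true
rm -f f5.headers tula.headers
```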
## Step 6: Cut Over
Once parallel testing confirms correct behaviour:
- Schedule a maintenance window, since reassigning IPs causes a brief interruption.
- Reassign production IPs from F5 to Tula. Update VIP addresses to match the original F5 virtual server addresses.
- Update static routes and firewall rules referencing F5 IPs.
- Click Apply Configuration and verify traffic flow.
- Monitor for 24-48 hours. Keep F5 available as a rollback option.
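During the monitoring period, a quick error-rate check over the access log gives an early signal. A minimal sketch, assuming common log format with the status code in field 9 (an inline sample log stands in for the real one; adjust the field index to your log format):

```shell
#!/bin/sh
# Quick error-rate check over an access log in common log format
# (status code in field 9). A small inline sample stands in for the
# real log file; adjust the awk field index to your format.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
10.0.0.1 - - [01/Jan/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 512
10.0.0.2 - - [01/Jan/2025:00:00:01 +0000] "GET /api HTTP/1.1" 502 0
10.0.0.3 - - [01/Jan/2025:00:00:02 +0000] "GET / HTTP/1.1" 200 512
10.0.0.4 - - [01/Jan/2025:00:00:03 +0000] "GET / HTTP/1.1" 200 512
EOF

errors=$(awk '$9 >= 500 { n++ } END { print n + 0 }' "$LOG")
total=$(awk 'END { print NR }' "$LOG")
echo "errors: $errors / $total requests"
rm -f "$LOG"
```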
## Common Gotchas
- iRule complexity: Simple iRules (redirects, header insertion, URI routing) map directly to HAProxy ACLs. Complex iRules with TCL logic or data groups may need rethinking -- contact Tula support for guidance.
- Priority group activation: Achievable in Tula using backend Active/Standby states.
- Connection mirroring: Not available in Tula. Use cookie-based session persistence for stateful failover.
- OneConnect: HAProxy offers equivalent server-side connection reuse via its `http-reuse` mechanism.
- Partitions / route domains: Plan equivalent segmentation using VLANs and separate VIP configurations.
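As an illustration of the first gotcha, here is a common URI-routing iRule alongside a sketch of its HAProxy equivalent (the pool, frontend, and backend names are hypothetical):

```
# F5 iRule (TCL): route /api traffic to a dedicated pool
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/api" } {
        pool api_pool
    } else {
        pool web_pool
    }
}
```

```
# HAProxy equivalent: an ACL plus use_backend rules
frontend fe_web
    bind :80
    acl is_api path_beg /api
    use_backend bk_api if is_api
    default_backend bk_web
```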