The bugzilla you've linked to is down for maintenance, so I'm going into this blind to what you know or have already done. Bear with me! I thought the same thing when I first hit bug 933640 in a customer's deployment, where they absolutely refused to run split-DNS. They had a cluster of cattle (not pets), where any service could be running on any instance, and services only knew about each other through their global DNS hostnames, which mapped to their floats and were further distinguished by ports.

I specialize in Linux networking and spent about three days going over the issue on a whiteboard, working with Vish and other folks, and hairpinning was the simplest and most elegant solution I came up with at the time. Here's my reasoning.

First of all, I agree with your preliminary reading. Hairpin mode in the Linux kernel was implemented as part of a larger set of features to let Linux act as a Virtual Ethernet Port Aggregator (VEPA), which is related to VEB as you mentioned. Hairpinning is absolutely an L2 function, and talking to your own float is indeed an L3 problem. However, getting out to your float and back in to a service that's actually listening on the same private IP you're sourcing from, without having to rewrite the client or the service, is both an L2 and an L3 issue.

The L3 portion is common: we need to DNAT on the way to a float (ingress initiated) and SNAT on the way from it (egress initiated), so the client thinks it's talking to the original service and the translated service thinks the translator is the original client. For talking to our own float, we actually need to do both, but from the "back" (VM rather than public) side of the host: DNAT towards our float, which translates to our (private) IP as the new destination, then SNAT on the way back to ourselves, so it looks like the traffic actually came *from* our float.

That, however, gives us an L2 issue: with native Linux bridges and the bridging engine's netfilter interaction (which you can see in this diagram from one of the netfilter devs: http://inai.de/images/nf-packet-flow.png), the bridge won't let a frame egress the same port it ingressed on unless hairpin_mode is enabled. So unless we jump to a separate router beyond the compute host and make *it* hairpin (the exact same issue, and usually discouraged even when straight routing; google "split horizon" and "reverse path filtering"), this is where the traffic needs to go.

The iptables rules nova-network used in Diablo/Essex to DNAT/SNAT floats didn't have -s restrictions, and may or may not have -i/-o restrictions depending on nova.conf flags. That turned out to be fortuitous, because it meant I could rely on the same rules for the two usual float NAT patterns I mentioned above, yet hit them from the back side, without changing any iptables rules.

That left only one more minor issue: the SNAT wasn't being hit on the way back to the VM, because of the iptables rule designed to -j ACCEPT fixed -> fixed VM traffic, which short-circuited before the float SNAT rules. So I made one small change there: I added -m conntrack ! --ctstate DNAT (example from Folsom, since Essex is no longer in the upstream git repo: https://github.com/openstack/nova/blob/stable/folsom/nova/network/linux_net.py#L566). With that extra match, the ACCEPT still skips the float SNAT for ordinary fixed -> fixed traffic, but a flow that has already been DNAT'd (which should only happen when we're hairpinning) falls through to the float SNAT, and... voila.
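To make that concrete, here's a rough sketch (not the shipped rules) of the three NAT pieces involved, shaped like the rule strings linux_net.py builds. The addresses are made up, I'm using the built-in chains for readability even though nova actually manages its own chains, the exact arguments depend on your nova.conf flags, and it all assumes the bridge is handing frames to iptables in the first place (the bridge-nf-call-iptables sysctl), which is the interaction the netfilter diagram above shows:

    # Rough sketch of the float NAT plus the conntrack-gated short-circuit.
    # Addresses and chain placement are illustrative only.

    FIXED_IP = "10.0.0.5"          # the VM's private address (placeholder)
    FLOAT_IP = "203.0.113.10"      # its floating IP (placeholder)
    FIXED_RANGE = "10.0.0.0/24"    # the fixed network (placeholder)

    rules = [
        # Ingress initiated: anything addressed to the float is DNAT'd to the
        # fixed IP.  This is the same rule a hairpinning VM hits from the
        # back side of the host.
        "-t nat -A PREROUTING -d %s -j DNAT --to-destination %s"
        % (FLOAT_IP, FIXED_IP),

        # The fixed -> fixed short-circuit, with the extra conntrack match so
        # that flows which were already DNAT'd (i.e. hairpinned float
        # traffic) are NOT accepted here and fall through to the float SNAT.
        "-t nat -A POSTROUTING -s %s -d %s -m conntrack ! --ctstate DNAT -j ACCEPT"
        % (FIXED_RANGE, FIXED_RANGE),

        # Egress initiated: traffic sourced from the fixed IP leaves looking
        # like it came from the float.  In the hairpin case this is what
        # makes the traffic appear to come *from* our own float.
        "-t nat -A POSTROUTING -s %s -j SNAT --to-source %s"
        % (FIXED_IP, FLOAT_IP),
    ]

    # nova's iptables manager applies these for real; printing is enough to
    # show the shape and, importantly, that the ACCEPT has to be evaluated
    # before the float SNAT.
    for rule in rules:
        print("iptables " + rule)

conntrack marks the flow as DNAT'd the moment the PREROUTING rule rewrites it, which is what makes ! --ctstate DNAT a reliable "are we hairpinning?" signal for fixed -> fixed traffic.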
The full path ends up looking like this:

    [VM (fixed src, float dst)]
      -> [Host DNAT (fixed src, fixed dst)]
      -> [Host SNAT (float src, fixed dst)]
      -> (hairpin lets it back in)
      -> [VM (float src, fixed dst)]

so all clients talk from fixed -> float, and all services listen on fixed for traffic from floats. Of course, if you're familiar with how netfilter does NAT, you'll also know that response traffic on an initially NAT'd flow is reversed automatically by conntrack without explicit rules, so the existing DNAT/SNAT paths for floats do all the needful things!

----------

Now, with all of this said... I can't seem to find the source code for SUSE Cloud 1.0 to verify that my patches are actually in there, or that the appropriate nwfilter bits around IPv6 will work as expected, but this has been proven to work for *many* other people, with no downsides. A few things I do know can interfere with it, though: your VM bridge not hosting your fixed gateway, your floats living on a different interface, or your bridge running in promiscuous mode. Vish and I worked through those one at a time, and the customer I was using as an example was in line with our (Rackspace's) opinionated approach to using nova-network and floats. Over Diablo -> Essex -> Folsom we disabled bridge promiscuity and enabled hairpins on every port, and never ran into any notable issues.

I'd love to find out more specifics about your environment and/or see the linux_net.py/nwfilter files from your installation, as well as hear any ideas you may have on better or different approaches!

-Evan
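P.S. In case it's useful while you're digging out linux_net.py: here's a quick, hypothetical sketch of how I'd eyeball the two bridge-side settings I mentioned (hairpin on every port, no promiscuous bridge) on a compute host. The bridge name is just a placeholder for whatever your fixed network uses.

    import os

    BRIDGE = "br100"        # placeholder: substitute your fixed-network bridge
    IFF_PROMISC = 0x100     # from linux/if.h

    # Every port on the bridge should have hairpin_mode set to 1; this is the
    # bridge-side view of roughly what nova ends up setting when it plugs a VIF.
    brif_dir = "/sys/class/net/%s/brif" % BRIDGE
    for port in sorted(os.listdir(brif_dir)):
        hairpin_path = os.path.join(brif_dir, port, "hairpin_mode")
        with open(hairpin_path) as f:
            print("%s hairpin_mode=%s" % (port, f.read().strip()))
        # To flip it on by hand:
        # with open(hairpin_path, "w") as f:
        #     f.write("1")

    # The bridge device itself should NOT be promiscuous for this setup.
    with open("/sys/class/net/%s/flags" % BRIDGE) as f:
        flags = int(f.read().strip(), 16)
    print("%s promiscuous=%s" % (BRIDGE, bool(flags & IFF_PROMISC)))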