Sometimes a request to associate a floating IP may fail when using nova-network with libvirt, e.g.:
> http://192.168.1.12:8774/v2/258a4b20c77240bf9b386411430683fa/servers/a9e734e4-5310-4191-a7f0-78fca4b367e7/action
>
> BadRequest: Bad request
> Details: {'message': 'Error. Unable to associate floating ip', 'code': '400'}
The real issue is that the ebtables rootwrap call fails:
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ebtables -t nat -I PREROUTING --logical-in br100 -p ipv4 --ip-src 192.168.32.10 ! --ip-dst 192.168.32.0/22 -j redirect --redirect-target ACCEPT
Exit code: 255
Stdout: ''
Stderr: "Unable to update the kernel. Two possible causes:\n1. Multiple ebtables programs were executing simultaneously. The ebtables\n userspace tool doesn't by default support multiple ebtables programs running\n concurrently. The ebtables option --concurrent or a tool like flock can be\n used to support concurrent scripts that update the ebtables kernel tables.\n2. The kernel doesn't support a certain ebtables extension, consider\n recompiling your kernel or insmod the extension.\n.\n"
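The error message above suggests serializing concurrent ebtables callers with the --concurrent option or an external lock such as flock. A minimal sketch of the file-lock approach (not nova's actual code; the lock path and the wrapped command are placeholders, since a real ebtables update needs root):

```python
import fcntl
import subprocess

def run_under_file_lock(cmd, lock_path="/tmp/ebtables.demo.lock"):
    """Run a command while holding an exclusive file lock, so that
    concurrent callers queue on the lock instead of racing on the
    kernel tables. Equivalent to: flock <lock_path> <cmd...>."""
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            return subprocess.run(cmd, capture_output=True, text=True)
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)

# Placeholder command; in practice this would be the ebtables invocation.
result = run_under_file_lock(["echo", "table update ran under the lock"])
```

Note that this only helps if every program touching the tables takes the same lock; a lock held only inside nova cannot serialize against libvirt.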
It happens at most about once per tempest run, and not in every run, so missing kernel support and the other listed causes should not apply here.
This is probably the same issue already mentioned in https://<email address hidden>/msg23422.html.
Since that call in nova is synchronized (locked), could it be that nova is racing with libvirt itself calling ebtables?
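If the failure really is a transient race with an external process, one hypothetical mitigation (a sketch, not nova's actual code) is to retry the ebtables call a few times before giving up, since the competing update finishes quickly:

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=0.2):
    """Retry a command that can fail transiently, e.g. an ebtables call
    losing a race with another process updating the kernel tables.
    Hypothetical mitigation sketch; attempts/delay values are arbitrary."""
    last = None
    for _ in range(attempts):
        last = subprocess.run(cmd, capture_output=True, text=True)
        if last.returncode == 0:
            return last.stdout
        time.sleep(delay)  # back off briefly before retrying
    raise RuntimeError(
        f"command {cmd!r} failed after {attempts} attempts: "
        f"{last.stderr.strip()}")

# Placeholder command; a real caller would pass the ebtables command line.
run_with_retry(["true"])
```

A retry papers over the race rather than removing it, but it would turn the rare 400 error above into a short delay.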
This happened with Havana on RHEL 6 and with Icehouse on RHEL 7.
As the failure is flaky, I mostly have only common logs and version information rather than detailed data; but since it occurs with both Havana and Icehouse on different kernel versions, it does not seem to be version-related anyway.
Attaching the part of nova-network.log showing that the locks were obtained and the command still failed.