wrong vip setup for interface when network interface naming is not consistent across units.

Bug #1727824 reported by Jason Hobbs
This bug affects 2 people
Affects                      Status      Importance  Assigned to    Milestone
Canonical Juju               Incomplete  Low         John A Meinel
OpenStack Neutron API Charm  Triaged     Medium      Unassigned

Bug Description

I'm using neutron-api with two VIPs:

value of "vip" config: 10.245.208.97 192.168.33.9

eth0 is on 10.245.208.x and eth1 is on 192.168.33.x:
http://paste.ubuntu.com/25825141/

However, the same vip is getting setup for both interfaces:
http://paste.ubuntu.com/25825137/

Here is the bundle we're using:
https://pastebin.canonical.com/201750/

juju logs are available here: https://10.245.162.101/artifacts/b432a78b-773e-43ee-b715-d0d10028fa7c/cpe_cloud_237/juju-crashdump-a5ae8981-5a5f-4bee-8692-8c7f5528b376.tar.gz

Revision history for this message
James Page (james-page) wrote :

I think this is caused by the containers not coming up with consistent network interface naming; the first hacluster unit appears to be trying to configure its resources the other way around from the other two units:

     eth0         eth1
-0   192.xxx      10.245.xxx
-1   10.245.xxx   192.xxx
-2   can't actually tell

Resources in the pacemaker cluster for VIPs must have consistent network device naming, so I'm not entirely sure how much we can do about this in the charm. Does Juju make any guarantees with regards to network interface ordering when binding network spaces?
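For illustration, the resources the charms create today look roughly like the following (resource IDs and netmasks here are hypothetical, not copied from the affected deployment). Because the nic parameter and the interface name baked into the resource ID assume the same enumeration on every unit, a unit whose interfaces come up the other way around gets the wrong VIP bound:

```shell
# Hypothetical sketch of interface-pinned VIP resources:
crm configure primitive res_neutron_eth0_vip ocf:heartbeat:IPaddr2 \
    params ip="10.245.208.97" cidr_netmask="21" nic="eth0"
crm configure primitive res_neutron_eth1_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.33.9" cidr_netmask="24" nic="eth1"
```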

summary: - wrong vip setup for interface
+ wrong vip setup for interface when network interface naming is not
+ consistent across units.
Changed in juju:
status: New → Incomplete
Changed in charm-neutron-api:
status: New → Triaged
importance: Undecided → Medium
Tim Penhey (thumper)
Changed in juju:
assignee: nobody → John A Meinel (jameinel)
Revision history for this message
James Page (james-page) wrote :

Reading the reference docs for the heartbeat resources we use for VIPs, we can actually omit the nic and netmask parameters from the resource configuration; heartbeat will try to figure these out itself when the resource is promoted on a unit.

This sounds like a sensible way forward; I'd like to switch one charm over to take this approach first, and then we can assess whether that works well or not.
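A sketch of what that would look like (resource ID hypothetical): with nic and cidr_netmask omitted, the IPaddr2 agent selects the interface whose subnet contains the VIP on whichever unit the resource is promoted, so per-unit interface ordering no longer matters:

```shell
# Hypothetical sketch: same VIP, no interface pinning.
crm configure primitive res_neutron_vip ocf:heartbeat:IPaddr2 \
    params ip="10.245.208.97"
```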

Revision history for this message
James Page (james-page) wrote :

There is complexity here as the pacemaker resources have the interface name encoded into them; so we also need to re-jig the resource naming, which for an existing cluster would mean deleting the existing VIP resources and then re-creating them with the new names (which is disruptive to service).

Revision history for this message
James Page (james-page) wrote :

If we used a short hash of the VIP being configured in its name, then that would also deal with the requirement to float IPv4 and IPv6 VIPs on the same network interface.
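A minimal sketch of such hash-based naming (the helper name, service prefix, and hash length are assumptions for illustration, not the charm's actual implementation). The resource ID derives from the VIP itself rather than the interface, so it is stable across units regardless of interface ordering, and IPv4 and IPv6 VIPs naturally get distinct names:

```python
import hashlib


def vip_resource_name(service, vip):
    """Build a pacemaker resource ID from a short hash of the VIP address.

    The name no longer encodes an interface, so it is identical on every
    unit, and each distinct address (IPv4 or IPv6) gets its own resource.
    """
    digest = hashlib.sha1(vip.encode('utf-8')).hexdigest()[:7]
    return 'res_{}_{}_vip'.format(service, digest)


# The two VIPs from this bug get distinct, interface-independent names:
print(vip_resource_name('neutron', '10.245.208.97'))
print(vip_resource_name('neutron', '192.168.33.9'))
```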

Revision history for this message
Liam Young (gnuoy) wrote :

Could we allow a new value for vip_iface of 'auto'? If set to auto then the iface is no longer passed on the ha interface and pacemaker is left to figure out the correct interface. If iface is set to an interface or is empty, the old behaviour is preserved.

Revision history for this message
Liam Young (gnuoy) wrote :

Actually maybe the magic value should be 'crm_auto' to show that the interface calculation is being done outside of the charm.
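A sketch of how the charm side might branch on such a sentinel (the function name, parameter names, and the detail that an empty iface also omits nic are assumptions, not the charms' actual code):

```python
def ha_vip_params(vip, vip_iface, vip_cidr):
    """Build the IPaddr2 parameter string for one VIP resource.

    The hypothetical sentinel 'crm_auto' (and, in this sketch, an empty
    vip_iface) omits nic/cidr_netmask so pacemaker resolves the interface
    itself; any other value keeps the old behaviour of pinning to one nic.
    """
    params = 'params ip="{}"'.format(vip)
    if vip_iface and vip_iface != 'crm_auto':
        params += ' nic="{}" cidr_netmask="{}"'.format(vip_iface, vip_cidr)
    return params


print(ha_vip_params('10.245.208.97', 'crm_auto', 21))
print(ha_vip_params('10.245.208.97', 'eth0', 21))
```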

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-neutron-api (master)

Change abandoned by James Page (<email address hidden>) on branch: master
Review: https://review.openstack.org/531359

Revision history for this message
Canonical Juju QA Bot (juju-qa-bot) wrote :

This bug has not been updated in 2 years, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: Undecided → Low
tags: added: expirebugs-bot
