Keystone DB gets all non-VIP endpoints + openstack service conf files get keystone non-VIP
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack keystone charm | Incomplete | Medium | David Ames | |
| keystone (Juju Charms Collection) | Invalid | Medium | David Ames | |
Bug Description
After battling with HA deploys for a while now, I have found the root cause(s) of my problems: keystone creates non-VIP endpoints in the keystone.endpoint table for each OpenStack service to which it is related, AND all OpenStack service charms get non-VIP keystone endpoints written into their .conf files.
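A minimal way to confirm the symptom (a sketch only; assumes the openstack CLI is installed and admin credentials live in a novarc file, and the grep target is just an example):
source ~/novarc                        # placeholder path to the admin credentials
openstack endpoint list                # the URLs here should carry the VIP, not individual unit addresses
grep -i keystone /etc/nova/nova.conf   # example service .conf; the rendered keystone address should be the VIP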
I'm deploying all next branch charms minus glance-
juju --version <- 1.24.6-trusty-amd64
juju status --format tabular <- http://
deployer.yaml <- http://
It would be nice to see this fixed before the 15.10 release so that we can move forward into the next cycle with more stable HA deployment capability!
Thanks!
| James Page (james-page) wrote : | #1 |
"keystone creates non-VIP endpoints in the keystone.endpoint table for each OpenStack service to which it is related, AND all OpenStack service charms get non-VIP keystone endpoints written into their .conf files."
This would indicate that clustering never successfully completed; the principal charms won't re-register VIPs until the hacluster subordinates have indicated that clustering has been completed.
A 'sudo crm status' on any of the hacluster'd services would be useful at this point as well.
| james beedy (jamesbeedy) wrote : | #2 |
To note: Initially 1 of the keystone endpoints (the admin endpoint of http://
| james beedy (jamesbeedy) wrote : | #3 |
Here is the keystone.endpoint db table from a recent deploy: http://
trusty-
One observation I can make here is that the ceph-radosgw endpoint is the only endpoint set to a VIP. Interestingly enough, radosgw is the only stateless service charm that still uses the old-school way of setting its endpoint, in the sense that it does not have the same parameter-checking and creation methods that the rest of the charms have within the ha-relation-
One workaround for this issue could be to set the os-public-endpoint params for each charm.
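A rough sketch of that workaround (option and host names are assumptions; current OpenStack charms expose os-public-hostname for this, and exact support varies by charm revision):
juju set keystone os-public-hostname=keystone.example.com   # pin the advertised public endpoint
juju set glance os-public-hostname=glance.example.com       # repeat per related charm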
| james beedy (jamesbeedy) wrote : | #4 |
^^ trusty-
| james beedy (jamesbeedy) wrote : | #5 |
I feel this is actually more of a conceptual misunderstanding. After pondering this issue for a while now, it dawned on me.....
You cannot create a VIP on a bridge interface.
You cannot create a VIP on an interface to which a bridge is bound.
The former is because the bridge is already a virtual device; the latter is because the bridge owns the interface to which it is bound.
1. Using MAAS as a provider, 'juju-br0' is created on the primary interface ('eth0' or last interface provisioned with gateway route) for physical/virtual nodes.
2. Juju + MAAS provisioned containers only have 1 network interface and it gets owned by 'juju-br0'.
2.a. 'juju-br0' owns the interface it is bridged on (no VIPs can be created on 'eth0' when 'juju-br0' is bound to 'eth0').
2.b. You can't create a VIP on a virtual bridge interface (i.e. 'juju-br0')!
2.c. Containers provisioned by juju only have one network interface (hence no option for VIP).
I believe this proves that currently:
- HA-deployed service units must configure the VIP on an interface other than the one 'juju-br0' binds to (i.e. set 'vip_iface' and 'ha-bindiface' to an interface other than the primary); see the sketch below.
- Services deployed to containers are not eligible for HA.
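A quick sketch of what that would look like in practice (interface names and addresses are assumptions for illustration):
brctl show juju-br0   # confirm which interface juju-br0 has enslaved
# keep the VIP and corosync traffic off the bridged primary NIC (options as named in the keystone charm; values assumed)
juju set keystone vip=10.0.0.200 vip_iface=eth1 vip_cidr=24 ha-bindiface=eth1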
Let me know your thoughts.
Thanks!
| james beedy (jamesbeedy) wrote : | #6 |
Extra notes:
2.c is redundant with 2.
I take back the work-around from comment #3.
| Ryan Beisner (1chb1n) wrote : | #7 |
Thank you for the bug report. We're working to reproduce in our bare metal lab. Once triaged, the bug will get status updates.
| Changed in keystone (Juju Charms Collection): | |
| assignee: | nobody → David Ames (thedac) |
| status: | New → In Progress |
| David Ames (thedac) wrote : | #8 |
James, in your comments above, it seems we are mixing physical host network setup (juju-br0) with lxc network setup. The lxcs should not have juju-br0 in them.
juju-br0 is used on the physical host to bridge traffic for itself and all of its lxcs.
We can apply a vip (an alias) to eth0 *inside* the lxc. We are doing that for ServerStack and I just stood up an independent test in another MAAS environment.
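For reference, adding such an alias by hand looks roughly like this (the address is illustrative; in the charms it is corosync/pacemaker that actually manages it):
sudo ip addr add 10.0.0.200/24 dev eth0   # inside the lxc: add the VIP as a secondary address on eth0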
On the physical host there are several bridges that allow the virtual networking. But notice that eth0 is a member of juju-br0 and that the IP for the physical host is on juju-br0. This is by design and works.
Example lxc eth0
sudo ip addr
43330: eth0: <BROADCAST,
link/ether 00:16:3e:b0:31:b1 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.50/24 brd 10.245.167.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.200/24 brd 10.245.167.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::216:
valid_lft forever preferred_lft forever
In this example 10.0.0.200 is the VIP and .50 is the main IP for the lxc. Corosync decides which node has the alias for the VIP.
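For context, the hacluster charm ends up defining something like the following pacemaker resource for the VIP (resource name and values here are illustrative, not taken from this deployment):
# pacemaker VIP resource of type ocf:heartbeat:IPaddr2; the cluster decides which node holds the address
sudo crm configure primitive res_ks_vip ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.200 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s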
So, we have not yet been able to recreate the failure to distribute keystone's VIP. We still need to figure out what is different in your environment.
| james beedy (jamesbeedy) wrote : | #9 |
thedac, this makes total sense....I'll be looking further into reproducing this and getting more info on what is really going on. I appreciate your looking into this. I'll keep you posted. Thanks
| David Ames (thedac) wrote : | #10 |
James,
Just an update. I spent the week trying to reproduce by deploying via MAAS using most of [1]. Unfortunately I was not able to reproduce this.
If you see this again, please provide the following from the nodes that have presented non-VIP addresses to keystone:
As James Page requested, we would need: sudo crm status
The /var/log/juju/* logs
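Something along these lines on each affected unit would capture both (archive name is just a suggestion):
sudo crm status                                                 # cluster state as pacemaker sees it
sudo tar czf /tmp/juju-logs-$(hostname).tar.gz /var/log/juju/   # bundle the juju logs to attach to the bug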
| Changed in keystone (Juju Charms Collection): | |
| status: | In Progress → Incomplete |
| Changed in keystone (Juju Charms Collection): | |
| importance: | Undecided → Medium |
| Changed in charm-keystone: | |
| assignee: | nobody → David Ames (thedac) |
| importance: | Undecided → Medium |
| status: | New → Incomplete |
| Changed in keystone (Juju Charms Collection): | |
| status: | Incomplete → Invalid |

"they are such that keystone creates non vip endpoints in the keystone.endpoints table for each openstack service to which it is related, AND all openstack service charms get non vip keystone endpoints transcluded into their .conf files."
This would indicate that clustered never successfully completed; the principle charms won't re-registered VIP's until the hacluster subordinates have indicated that clustering has been completed.
A 'sudo crm status' on any of the haclustered' services would be useful at this point as well.