The single-nova-consoleauth parameter is intended to ensure that, at any given time, only one Pacemaker-managed instance of the nova-consoleauth service is active in an HA cluster.
While the charm correctly injects an override file for the nova-consoleauth service, ensuring that the service isn't started on system startup, Juju *itself* still starts the service on boot, and on every execution of the config-changed hook.
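For reference, on Upstart-based releases the kind of override the charm injects would look something like this (the exact path and service name here are illustrative; the "manual" stanza is the standard Upstart mechanism for disabling automatic start):

```
# /etc/init/nova-consoleauth.override
# Prevent Upstart from starting the job automatically at boot;
# Pacemaker is expected to start/stop it instead.
manual
```

The problem is that this only stops the init system from starting the job; it does not stop Juju hooks (or an explicit "service nova-consoleauth start") from doing so.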
Steps to reproduce:
- In an HA configuration with multiple nova-cloud-controller units, reboot one controller node.
- Alternatively, change any parameter on the nova-cloud-controller service with "juju set".
- Log into the controller node after reboot. Make sure that the node does *not* run the res_nova_consoleauth resource.
- Check your process table
- Observe that an instance of nova-consoleauth is now running. This should not happen.
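The verification steps above can be sketched as follows (res_nova_consoleauth is the resource name from this deployment; the bracketed grep pattern is a common trick to exclude the grep process itself from the match):

```shell
# On the controller node that should NOT be running the resource,
# confirm where Pacemaker has placed res_nova_consoleauth:
sudo crm_resource --resource res_nova_consoleauth --locate

# Check the process table for a consoleauth process. The character
# class [n] prevents this grep invocation from matching itself:
ps aux | grep '[n]ova-consoleauth'
# Any output on the passive node indicates the bug: the service was
# started outside Pacemaker's control.
```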
In an OpenStack environment that misbehaves in this way, novnc console access breaks with the symptoms described here:
https://ask.openstack.org/en/question/32888/unable-to-run-vnc-due-to-consoleauth-failing/
Reading the code, I can see how this occurs - once this configuration option has been set, nova-consoleauth should be managed differently, ensuring that the charm does not restart it on subsequent configuration file changes.
It's also possible to use a memcache backend for the consoleauth processes - the charm should support this option, allowing multiple instances to run concurrently.
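As a sketch of that alternative (the address is a placeholder, and the exact option location may vary by nova release), pointing nova at a shared memcached lets consoleauth tokens be stored centrally, so more than one nova-consoleauth instance can serve requests:

```
# nova.conf fragment (illustrative):
[DEFAULT]
# Shared token storage for nova-consoleauth; with this set, tokens
# issued by one consoleauth instance are visible to the others.
memcached_servers = 10.0.0.10:11211,10.0.0.11:11211
```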
Marking as 'Triaged' and setting to High importance.