https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_amulet_full/openstack/charm-rabbitmq-server/474958/12/278/test_charm_amulet_full_261/amulet-full.txt
DEBUG:runner:2017-06-16 23:44:48,586 test_901_remove_unit DEBUG: Checking that units correctly clean up after themselves on unit removal...
DEBUG:runner:2017-06-16 23:44:48,586 _configure_services INFO: OpenStackAmuletDeployment: configure services
DEBUG:runner:2017-06-16 23:48:03,323 _auto_wait_for_status INFO: Waiting for extended status on units...
DEBUG:runner:2017-06-16 23:48:03,328 _auto_wait_for_status DEBUG: Custom extended status wait match: ^Unit is ready and clustered$
DEBUG:runner:2017-06-16 23:48:03,328 _auto_wait_for_status DEBUG: Waiting up to 1200s for extended status on services: ['rabbitmq-server']
DEBUG:runner:2017-06-16 23:48:03,749 _auto_wait_for_status INFO: OK
DEBUG:runner:2017-06-16 23:48:19,423 _auto_wait_for_status INFO: Waiting for extended status on units...
DEBUG:runner:2017-06-16 23:48:19,424 _auto_wait_for_status DEBUG: Custom extended status wait match: ^Unit is ready and clustered$
DEBUG:runner:2017-06-16 23:48:19,424 _auto_wait_for_status DEBUG: Waiting up to 1200s for extended status on services: ['rabbitmq-server']
DEBUG:runner:2017-06-16 23:48:19,749 _auto_wait_for_status INFO: OK
DEBUG:runner:2017-06-16 23:48:29,193 get_unit_hostnames DEBUG: Unit host names: {'rabbitmq-server/6': 'juju-4d268d-auto-osci-sv06-15', 'rabbitmq-server/7': 'juju-4d268d-auto-osci-sv06-16'}
DEBUG:runner:2017-06-16 23:48:34,445 run_cmd_unit DEBUG: rabbitmq-server/6 `rabbitmqctl cluster_status` command returned 0 (OK)
DEBUG:runner:2017-06-16 23:48:34,446 get_rmq_cluster_status DEBUG: rabbitmq-server/6 cluster_status:
DEBUG:runner:Cluster status of node 'rabbit@juju-4d268d-auto-osci-sv06-15' ...
DEBUG:runner:[{nodes,[{disc,['rabbit@juju-4d268d-auto-osci-sv06-15',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-16',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-17']}]},
DEBUG:runner: {running_nodes,['rabbit@juju-4d268d-auto-osci-sv06-16',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-17',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-15']},
DEBUG:runner: {cluster_name,<<"rabbit@juju-4d268d-auto-osci-sv06-17">>},
DEBUG:runner: {partitions,[]}]
DEBUG:runner:2017-06-16 23:48:39,713 run_cmd_unit DEBUG: rabbitmq-server/7 `rabbitmqctl cluster_status` command returned 0 (OK)
DEBUG:runner:2017-06-16 23:48:39,713 get_rmq_cluster_status DEBUG: rabbitmq-server/7 cluster_status:
DEBUG:runner:Cluster status of node 'rabbit@juju-4d268d-auto-osci-sv06-16' ...
DEBUG:runner:[{nodes,[{disc,['rabbit@juju-4d268d-auto-osci-sv06-15',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-16']}]},
DEBUG:runner: {running_nodes,['rabbit@juju-4d268d-auto-osci-sv06-15',
DEBUG:runner: 'rabbit@juju-4d268d-auto-osci-sv06-16']},
DEBUG:runner: {cluster_name,<<"rabbit@juju-4d268d-auto-osci-sv06-17">>},
DEBUG:runner: {partitions,[]}]
DEBUG:runner:['Cluster registration check failed on rabbitmq-server/6: rabbit@juju-4d268d-auto-osci-sv06-17 should not be registered with RabbitMQ after unit removal.\n']
DEBUG:runner:Exit Code: 1
DEBUG:bundletester.utils:Updating JUJU_MODEL: "auto-osci-sv06:admin/auto-osci-sv06" -> ""
DEBUG:bundletester.fetchers:git rev-parse HEAD: 46a0b0760e236fc4ea7909f59d361a925838cd6f
ERROR: InvocationError: '/var/lib/jenkins/checkout/0/rabbitmq-server/.tox/func27/bin/bundletester -vl DEBUG -r json -o func-results.json --test-pattern gate-* --no-destroy'
___________________________________ summary ____________________________________
ERROR: func27: commands failed
The test will happily remove the leader unit, or the unit that formed the cluster and gave it its name. When that happens, the test fails.
It is debatable whether the removal code should account for this and try to update the cluster name.
In the meantime we could improve the test to avoid removing the unit that gave the cluster its name, so we don't get intermittent failures.
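One possible shape for that test improvement, as a minimal sketch: pick a unit to remove whose hostname does not match the node embedded in `cluster_name`. The function name `pick_unit_to_remove` and its arguments are hypothetical; the real test would use the existing amulet helpers (`get_unit_hostnames`, `get_rmq_cluster_status`) to obtain the same data.

```python
import re


def pick_unit_to_remove(unit_hostnames, cluster_status_output):
    """Return a unit whose node did NOT give the cluster its name.

    unit_hostnames: dict of unit name -> machine hostname, as returned
    by something like get_unit_hostnames().
    cluster_status_output: raw `rabbitmqctl cluster_status` text.
    (Both parameter names are illustrative, not the charm's actual API.)
    """
    # cluster_name appears as {cluster_name,<<"rabbit@<hostname>">>}
    m = re.search(r'\{cluster_name,<<"rabbit@([^"]+)">>\}', cluster_status_output)
    name_giver = m.group(1) if m else None
    for unit, host in unit_hostnames.items():
        if host != name_giver:
            return unit
    return None  # every remaining unit is the name-giver; caller decides
```

With the hostnames from the log above, this would skip `juju-4d268d-auto-osci-sv06-17` style units whose hostname matches the cluster name and return one of the others instead.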
The root cause may be that the registration check and/or its regex is too broad and also matches the cluster_name row. That check should only match the node names listed under "nodes". I'll dig into that as well and see what turns up.
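To illustrate the narrower check being suggested: first isolate the `{nodes,[{disc,[...]}]}` block of the `rabbitmqctl cluster_status` output, then extract node names only from it, so a departed node lingering in `cluster_name` cannot trigger a false positive. This is a sketch of the idea, not the charm's actual check; the helper name `registered_nodes` is made up for illustration.

```python
import re


def registered_nodes(cluster_status_output):
    """Extract node names from the {nodes,...} tuple only.

    Scanning the whole cluster_status output for 'rabbit@...' would also
    hit the {cluster_name,<<"rabbit@...">>} row, which is the suspected
    source of the false "should not be registered" failure.
    """
    m = re.search(r"\{nodes,\[\{disc,\[(.*?)\]\}\]\}",
                  cluster_status_output, re.DOTALL)
    if not m:
        return []
    # Node names are single-quoted Erlang atoms, e.g. 'rabbit@host'
    return re.findall(r"'([^']+)'", m.group(1))
```

Against the rabbitmq-server/7 output above, this would return only the two disc nodes and correctly exclude `rabbit@juju-4d268d-auto-osci-sv06-17`, which now appears only in `cluster_name`.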