RabbitMQ is down during the converge, and the deployment can hang until the Heat timeout is reached:
Full list of resources:
ip-172.21.35.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.21.33.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-10.12.149.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: haproxy-clone [haproxy]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
    Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-172.21.33.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
Clone Set: rabbitmq-clone [rabbitmq]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
    Masters: [ overcloud-controller-0 ]
    Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
ip-10.12.149.91 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.21.36.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Failed Actions:
* rabbitmq_monitor_10000 on overcloud-controller-2 'not running' (7): call=86, status=complete, exitreason='none',
last-rc-change='Thu Oct 20 12:07:55 2016', queued=1363ms, exec=1601ms
PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
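The failed rabbitmq monitor on overcloud-controller-2 can be confirmed directly on that node. A rough sketch of the checks (assuming SSH access to the controller and root/sudo rights):

$ sudo rabbitmqctl cluster_status
$ sudo pcs status --full | grep -i rabbitmq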
$ heat resource-list overcloud -n 5 | grep -i in_pro
WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
| AllNodesDeploySteps | 4818c1d2-1022-4cd3-b63f-a9737ef6aec3 | OS::TripleO::PostDeploySteps | CREATE_IN_PROGRESS | 2016-10-20T12:04:36Z | overcloud |
| ComputeDeployment_Step3 | 5a82a140-80db-44a7-9c03-84269ab071b3 | OS::Heat::StructuredDeploymentGroup | CREATE_IN_PROGRESS | 2016-10-20T12:04:37Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi |
| ControllerDeployment_Step3 | 04a3f286-1f2c-4025-93fd-ce01762eedc3 | OS::Heat::StructuredDeploymentGroup | CREATE_IN_PROGRESS | 2016-10-20T12:04:38Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi |
| 0 | a5c19dd4-5372-47a5-a36e-2f82f0205c1f | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-10-20T12:12:08Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi-ComputeDeployment_Step3-jxawuhg6hkme |
| 0 | fa127e05-e2e6-48fd-bf66-2e1949ccf1cd | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-10-20T12:12:08Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi-ControllerDeployment_Step3-k4zox3i4xrmm |
| 1 | 4bf218da-3f8a-4554-91e8-d62b57f8e4ae | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-10-20T12:12:08Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi-ControllerDeployment_Step3-k4zox3i4xrmm |
| 2 | 32021dc1-f4ce-4440-9482-17e3cb226013 | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-10-20T12:12:08Z | overcloud-AllNodesDeploySteps-eedu2uqm2msi-ControllerDeployment_Step3-k4zox3i4xrmm
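As the deprecation warning suggests, the same stuck resources can be listed with the newer OSC command; a rough equivalent of the call above would be:

$ openstack stack resource list --nested-depth 5 overcloud | grep -i in_pro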
A pcs resource cleanup solves the issue, so I think only a documentation patch is needed here.
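A minimal sketch of that cleanup, assuming it targets the rabbitmq clone shown in the status output above (run on any controller):

$ sudo pcs resource cleanup rabbitmq-clone

Running pcs resource cleanup with no resource name also works; it simply clears failed actions for all resources.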