Note that the previous failed BVT #63 shows the same issue with a sudden interrupt in the puppet logs:
"2015-03-01T07:08:45.532647+00:00 notice: (/Stage[corosync_setup]/Osnailyfacter::Cluster_ha::Virtual_ips/Cluster::Virtual_ips[public_old]/Cluster::Virtual_ip[public_old]/Cs_resource[vip__public_old]/ensure) created"
That is probably some devops environment-related issue.
The stopped state of resources after the failed deployment is not an issue; it is expected behavior ensured by the Astute orchestrator: there are 3 controller nodes in the cluster, but there is no quorum in Corosync, so no-quorum-policy=stop stops all resources as expected.
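To verify this behavior on a live controller, the quorum policy can be queried from the Pacemaker configuration. A minimal sketch using the crm shell (assumes a node with the crm tools installed; commands require a running cluster, so this is a CLI fragment rather than a runnable example):

```
# Show the configured quorum policy (expected: no-quorum-policy=stop)
crm configure show | grep no-quorum-policy

# Or query the cluster property directly
crm_attribute --type crm_config --query --name no-quorum-policy

# With 3 controllers, quorum is lost once 2 of them are down or
# partitioned away; no-quorum-policy=stop then makes Pacemaker
# stop all resources on the remaining node, which matches the
# "stopped" resource state seen after the failed deployment.
```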