I can't reproduce it manually. Also, I still think the snapshot was created from some other env, because:
1. According to the snapshot logs, 'test_schedule_to_all_nodes' passed:
$ grep -irn "test_schedule_to_all_nodes"
nailgun.test.domain.local/root/log.log:1888:{1} tempest.scenario.test_server_multinode.TestServerMultinode.test_schedule_to_all_nodes [64.947272s] ... ok
node-5/root/log.log:1888:{1} tempest.scenario.test_server_multinode.TestServerMultinode.test_schedule_to_all_nodes [64.947272s] ... ok
2. I can't find the error message or the server UUID in the logs:
nik@snikitin:fuel-snapshot-2016-07-26_09-04-51$ grep -irn "OVS configuration failed"
nik@snikitin:fuel-snapshot-2016-07-26_09-04-51$
nik@snikitin:fuel-snapshot-2016-07-26_09-04-51$ grep -irn "ada719f3-b9a7-43e3-9c13-d61e3edc60e5"
nik@snikitin:fuel-snapshot-2016-07-26_09-04-51$
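As a side note, the two searches above can be combined into one recursive pass using grep's -E alternation. A minimal sketch on throwaway demo files (the /tmp/snap-demo path and log contents are made up for illustration, not taken from the snapshot):

```shell
# Demo files standing in for snapshot logs (hypothetical, illustration only)
mkdir -p /tmp/snap-demo/node-5/root
printf 'boot ok\nOVS configuration failed\n' > /tmp/snap-demo/node-5/root/log.log

# One recursive (-r), case-insensitive (-i), line-numbered (-n) pass
# matching either the error text or the server UUID (-E alternation)
grep -rinE "OVS configuration failed|ada719f3-b9a7-43e3-9c13-d61e3edc60e5" /tmp/snap-demo
# → /tmp/snap-demo/node-5/root/log.log:2:OVS configuration failed
```

An empty result from a single combined pass over the unpacked snapshot root would confirm that neither string appears anywhere in the attached logs.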
3. According to the logs, the snapshot env has the following nodes:
'slave-01_controller_mongo' 'slave-02_controller_mongo' 'slave-03_controller_mongo' 'slave-04_compute_cinder' 'slave-05_compute_cinder' 'slave-06_ironic'
The bug description mentions only 4 nodes: "controller, compute, ironic, cinder, Telemetry - MongoDB".
Or is the description incorrect?
So I'm marking the bug as Incomplete until it is reproduced on CI again or the correct snapshot is attached.