Had a look here as homework from yesterday's CI escalation call (marios|ruck, CI sprint 23).
This is definitely not failing consistently; for example, the current run on the master gate check is green for both scenario000-multinode-oooq-container-updates [1] and upgrades [2], which supports Thomas's assessment in comment #1 about a race condition.
Digging a little further, I suspect this patch [3] is responsible, and in particular I'm wondering if the hacky sleep at [4] was helping us avoid the race (though we might be able to come up with a better solution...). A rough sketch of what I have in mind is below.
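To be clear about what "a better solution" could look like: instead of a hard-coded sleep, poll for the condition with a bounded timeout, roughly like the sketch below. The helper name, timings, and readiness check here are made up for illustration; this is not the actual tripleoclient/package_update.py code.

    import time

    # Rough sketch only -- wait_for() and the timeout/interval values are
    # hypothetical, not the real code at [4]. The idea is to replace a fixed
    # time.sleep() with a bounded poll so we wait exactly as long as needed
    # instead of racing against a hard-coded delay.
    def wait_for(check_ready, timeout=300, interval=5):
        """Poll check_ready() until it returns True or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if check_ready():
                return True
            time.sleep(interval)
        return False

Something like wait_for(lambda: some_readiness_check(), timeout=120) in place of the sleep would at least make the wait explicit and bounded rather than a magic number.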
[1] http://logs.openstack.org/45/560445/203/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-updates/fbe4a42/
[2] http://logs.openstack.org/45/560445/203/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/f05d39d/
[3] https://review.openstack.org/#/q/I39b5a1a154b10683ac0de85afd6bbadc3491192a
[4] https://review.openstack.org/#/c/563000/2/tripleoclient/workflows/package_update.py@92