scenario010 random tempest failures in check / gate, cloud related
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
tripleo | Fix Released | High | Unassigned |
Bug Description
(cgoncalves) Yes, this is a random failure. The amphora (a Nova instance) took way too long to boot (>20 minutes).
https:/
This sort of problem could be mitigated if the VMs ran on KVM (nested virtualization) instead of QEMU/TCG.
Vexxhost, OVH and Fortnebula have nested virt enabled, but the job is configured to use QEMU because the RAX nodepools do not have nested virt enabled. We could work around that; ping cgoncalves. A sketch of the idea follows.
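A minimal sketch of that workaround, assuming the node exposes /dev/kvm when nested virt is available. The `[libvirt] virt_type` option is real Nova configuration, but the `write_nova_override()` helper and the config path are illustrative assumptions, not the actual CI plumbing:

```python
# Hedged sketch: pick the Nova libvirt virt_type per node instead of
# hardcoding QEMU for the whole job. The /dev/kvm probe and the
# "virt_type" option are real; everything else here is illustrative.
import os


def pick_virt_type() -> str:
    # /dev/kvm only exists when KVM is usable (e.g. nested virt on
    # Vexxhost/OVH/Fortnebula nodes); RAX nodes lack it, so fall back
    # to plain QEMU/TCG emulation there.
    return "kvm" if os.path.exists("/dev/kvm") else "qemu"


def write_nova_override(path: str = "/etc/nova/nova.conf.d/99-virt.conf") -> None:
    # Illustrative only -- the real jobs set this through featureset vars.
    with open(path, "w") as f:
        f.write("[libvirt]\nvirt_type = {}\n".format(pick_virt_type()))


if __name__ == "__main__":
    print("selected virt_type:", pick_virt_type())
```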
Not sure, but this looks like an issue in the Octavia tests,
perhaps in how they register resources for cleanup? It actually seems that cleanup tries to delete the resource but is unable to, since it is in PENDING_CREATE state, and then its dependencies also fail to be deleted as in-use:
a) a test timed out waiting for some resource (the LB) to reach ACTIVE
b) then in cleanup some other resource (not sure what flavors are in Octavia) is supposed to be deleted but cannot be, since it is still in use by the resource from a) (see the sketch after the log excerpts below)
```
2020-01-31 07:03:53 | Body: {"loadbalancer": {"provider": "octavia", ... "provisioning_
2020-01-31 07:03:53 | Details: {u'debuginfo': None, u'faultcode': u'Client', u'faultstring': u'Flavor 00631b21-
```
but it's unclear what that second failed-to-cleanup resource (the flavor) is ... there is no other mention of that ID in the logs
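A minimal sketch of that suspected sequence, assuming a hypothetical `octavia_client` object; its method names (`get_load_balancer`, `delete_load_balancer`, `delete_flavor`) are illustrative stand-ins for the Tempest Octavia plugin calls, not the plugin's actual API:

```python
# Hedged sketch of the suspected cleanup cascade; `octavia_client` and
# all of its method names are hypothetical stand-ins, not a real API.

def cleanup_after_failed_test(octavia_client, lb_id, flavor_id):
    # (a) The test gave up waiting for ACTIVE, so the LB is still stuck
    # in PENDING_CREATE when cleanup starts.
    lb = octavia_client.get_load_balancer(lb_id)
    print("LB provisioning_status at cleanup:", lb["provisioning_status"])

    try:
        # Octavia treats resources in PENDING_* states as immutable, so
        # this delete is rejected (e.g. with a 409 Conflict).
        octavia_client.delete_load_balancer(lb_id)
    except Exception as exc:
        print("LB delete rejected while PENDING_CREATE:", exc)

    try:
        # (b) The flavor delete then fails too, because the still-existing
        # LB references it -- matching the "Flavor ... is in use" fault
        # string seen in the log excerpt above.
        octavia_client.delete_flavor(flavor_id)
    except Exception as exc:
        print("flavor delete rejected, still in use:", exc)
```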
So if Octavia really cannot handle deletion of resources in PENDING_CREATE state, will cleanup need to wait for it (potentially indefinitely)? One way to bound that wait is sketched below.
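If cleanup does have to wait, it should at least be deadline-bounded so a stuck resource cannot hang the whole run. A minimal sketch, again with the same hypothetical client; the timeout and poll-interval values are arbitrary assumptions:

```python
import time


def wait_out_pending(client, lb_id, timeout=1200, interval=10):
    """Poll until the LB leaves any PENDING_* provisioning state.

    Returns True if the resource settled (e.g. ACTIVE or ERROR) within
    `timeout` seconds; False if we gave up, so the caller can log the
    leaked resource instead of blocking cleanup indefinitely.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_load_balancer(lb_id)["provisioning_status"]
        if not status.startswith("PENDING_"):
            return True
        time.sleep(interval)
    return False
```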
Changed in tripleo:
milestone: ussuri-2 → ussuri-3
After 31st Jan, the job is passing: http://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-7-scenario010-standalone&pipeline=gate
Still needed; we can look for a proper fix.