[system-test] In ceph_ha_restart test we wait all Nova services but 1 compute was destroyed
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Released | Medium | Dmitry Tyzhnenko | |
Bug Description
Failed on CI - http://
Fuel ISO 6.1-497
{
u'build_id': u'2015-
u'build_
u'auth_
u'fuel-
u'fuel-
u'nailgun_sha': u'3830bdcb28ec0
u'openstack
u'production': u'docker',
u'api': u'1.0',
u'python-
u'astute_sha': u'cbae24e9904be
u'fuelmain_
u'feature_
u'release': u'6.1',
u'release_
u'api': u'1.0',
u'release': u'6.1',
}}},
}
Scenario:
1. Create cluster
2. Add 3 nodes with controller and ceph OSD roles
3. Add 1 node with ceph OSD role
4. Add 2 nodes with compute and ceph OSD roles
5. Deploy the cluster
6. Wait until Galera and Cinder are up
7. Check ceph status
8. Run OSTF
9. Destroy osd-node
10. Check ceph status
11. Run OSTF
12. Destroy one compute node
13. Check ceph status
14. Run OSTF
15. Cold restart
16. Wait until Galera and Cinder are up
17. Run single OSTF test: create a volume and attach it to an instance
18. Run OSTF
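The failure mode behind this bug can be sketched as a wait helper. The function and field names below are hypothetical illustrations, not the actual system-test code: the point is that the check must exclude hosts the scenario destroyed on purpose (the compute from step 12), otherwise waiting for all Nova services to come up can never succeed.

```python
import time

def pending_services(services, excluded_hosts=()):
    """Return Nova services that are not 'up', ignoring services on hosts
    that the scenario destroyed deliberately (e.g. the compute from step 12)."""
    return [s for s in services
            if s["host"] not in excluded_hosts and s["state"] != "up"]

def wait_nova_services_up(list_services, excluded_hosts=(), timeout=300, interval=10):
    """Poll a `list_services` callable until every non-excluded service is up,
    or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    while True:
        down = pending_services(list_services(), excluded_hosts)
        if not down:
            return
        if time.time() >= deadline:
            raise AssertionError("Nova services still down: %r" % down)
        time.sleep(interval)

# Without excluding the destroyed compute, the wait loops forever:
services = [
    {"binary": "nova-compute", "host": "node-5", "state": "down"},  # destroyed node
    {"binary": "nova-scheduler", "host": "node-1", "state": "up"},
]
print(pending_services(services))                              # one stale entry
print(pending_services(services, excluded_hosts={"node-5"}))   # []
```

With the destroyed host excluded, the wait succeeds as soon as the surviving services are up, which is what the renamed test summary asks for.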
nova service list from OSTF logs - http://
Expected results:
All OSTF runs in the scenario should pass.
Actual result:
OSTF failed after step 15: the test waits for all Nova services to come up, but one compute node was destroyed in step 12, so its nova-compute service never reports as up.
Changed in fuel: | |
assignee: | Fuel Library Team (fuel-library) → Oleksiy Molchanov (omolchanov) |
status: | New → Confirmed |
Changed in fuel: | |
milestone: | 7.0 → 6.1 |
summary: |
- Nova services didn't start after cold restart nodes in environment
+ [system-test] In ceph_ha_restart test we wait all Nova services but 1 compute was destroyed |
tags: | added: non-release system-tests |
tags: | removed: non-release |
Changed in fuel: | |
status: | Fix Committed → Fix Released |
This is just a compute node reporting as down, not a controller. We aren't managing uptime of nova-compute on controllers. Marking as Medium and 7.0. A user would need to restart the Nova services on the controller to restore this node's functionality.
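As a rough illustration of the check behind this comment, one can filter a `nova service-list`-style listing for services reported as down before deciding what to restart. The sample rows and column layout below are assumed for illustration, not taken from this bug's logs:

```shell
# Sample rows in "binary host state" form; in practice these would come from
# parsing real `nova service-list` output (columns assumed for illustration).
sample='nova-compute node-5 down
nova-scheduler node-1 up
nova-conductor node-1 up'

# Print every service whose state column reads "down".
echo "$sample" | awk '$3 == "down" {print $1, "on", $2}'
```

A down entry for a deliberately destroyed compute is expected and can be ignored; only down services on surviving nodes would then need a restart via the node's service manager.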