Node disk configuration resets to default after resetting cluster
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Confirmed | High | Fuel Python (Deprecated) | |
| 8.0.x | Won't Fix | High | Fuel Python (Deprecated) | |
Bug Description
According to https:/
#Reset Environment
Click this button to reset the whole environment back to the state it was in right before the "Deploy changes" button was first clicked.
My state before clicking "Deploy changes" included a non-default size for 'vdc', so I expect to see that state restored after the reset procedure completes.
VERSION:
feature_groups:
- mirantis
production: "docker"
release: "8.0"
api: "1.0"
build_number: "429"
build_id: "429"
fuel-nailgun_sha: "12b15b2351e250
python-
fuel-agent_sha: "df16d41cd7a944
fuel-
astute_sha: "c7ca63a4921674
fuel-library_sha: "3eaf4f4a9b88b2
fuel-ostf_sha: "214e794835acc7
fuel-mirror_sha: "b62f3cce5321fd
fuelmenu_sha: "85de57080a18fd
shotgun_sha: "63645dea384a37
network-
fuel-upgrade_sha: "616a7490ec7199
fuelmain_sha: "e8e36cff332644
Steps to reproduce
Scenario:
1. Create a new environment
2. Choose Neutron with tunneling segmentation (TUN)
3. Add 5 controller nodes
4. Add 1 compute node
5. Add 2 cinder nodes
6. Change the default partitioning on the cinder nodes: set a non-default size for 'vdc'
7. Verify networks
8. Deploy the environment
9. Verify networks
10. Run OSTF tests
11. Reset the cluster, change the OpenStack username, password, and tenant, then re-deploy
Actual result:
After resetting the cluster, the cinder 'vdc' size is restored to its default instead of the value set before deployment.
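The mismatch above can be expressed as a small comparison check. This is a minimal sketch, not part of the original report: the `before`/`after` layouts are hypothetical sample data that mimic the shape of nailgun's per-node disks JSON (volume sizes in MB), and the helper names are made up for illustration.

```python
def volume_sizes(disks):
    """Map (disk name, volume name) -> allocated size in MB."""
    return {
        (d["name"], v["name"]): v["size"]
        for d in disks
        for v in d["volumes"]
    }

def changed_volumes(before, after):
    """Return volumes whose allocated size differs between two layouts."""
    b, a = volume_sizes(before), volume_sizes(after)
    return {k: (b[k], a[k]) for k in b.keys() & a.keys() if b[k] != a[k]}

# Hypothetical custom layout saved before the reset: 'vdc' shrunk for cinder.
before = [{"name": "vdc", "volumes": [{"name": "cinder", "size": 10000}]}]
# Hypothetical layout reported after the reset: 'vdc' back at its default.
after = [{"name": "vdc", "volumes": [{"name": "cinder", "size": 50000}]}]

# A non-empty diff here means the custom partitioning was lost by the reset.
print(changed_volumes(before, after))
```

In practice the two layouts could be captured by downloading the node's disk configuration before deployment and again after the reset, then diffing them as above.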
Changed in fuel:
milestone: 8.0 → 9.0
Changed in fuel:
status: New → Confirmed
assignee: nobody → Fuel Python Team (fuel-python)
tags: removed: area-docs
Changed in fuel:
importance: Medium → High
A scale environment with 200 nodes was idle for 3 days because of this bug. It may not be critical for the product, but it is definitely critical for scale testing, where we have a very strict schedule.
Other QA teams also hit this bug from time to time, so please raise the priority and fix it ASAP.