It is possible to remove a node from UI while provisioning
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | Medium | Fuel Python (Deprecated) |
Bug Description
Due to bug https:/
I think we need to add checks to the backend that validate all requests coming from the UI, to avoid issues like this. Right now the issue is caused by a bug in the UI, but who knows what might happen in the future.
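A minimal sketch of what such a backend check could look like: reject a provisioning request whose node list does not match the nodes the backend has recorded for the cluster. All names here (`ValidationError`, `validate_provisioning_request`) are illustrative assumptions, not Nailgun's actual API.

```python
class ValidationError(Exception):
    """Raised when a UI request disagrees with backend state."""


def validate_provisioning_request(cluster_node_ids, requested_node_ids):
    """Ensure the UI request references exactly the nodes the backend
    knows belong to the cluster; otherwise refuse the request instead
    of silently provisioning a subset.
    """
    cluster = set(cluster_node_ids)
    requested = set(requested_node_ids)

    missing = cluster - requested   # nodes the UI silently dropped
    unknown = requested - cluster   # nodes the UI invented

    if missing:
        raise ValidationError(
            "Request omits nodes assigned to the cluster: %s" % sorted(missing))
    if unknown:
        raise ValidationError(
            "Request references nodes not in the cluster: %s" % sorted(unknown))
    return sorted(requested)
```

With a check like this, the UI bug described above would have produced an explicit error instead of provisioning an HA cluster with one controller missing.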
Reproduces on:
{"build_id": "2014-03-
Steps to reproduce (KVM):
1. Create HA cluster with 3 controllers and 1 compute.
2. Note the MAC address of one of the controllers.
3. Click "Deploy Changes".
4. Due to https:/
- the page will refresh and provisioning of an HA cluster with 2 controllers will start (see screenshots)
5. Wait until the cluster is deployed, check whether the node is visible to cobbler, ssh to it, and run OSTF for HA.
- there will be no errors, the nodes will become ready, and the HA OSTF tests will pass, because on the backend the node still remains in the cluster.
6. Delete cluster.
7. Create new cluster.
8. Go to the "Assign Roles" tab.
Expected result:
The node that was deleted from the HA cluster is in the bootstrapped state.
Actual result:
The node that was deleted is in the ready state; ssh-ing to it shows that the OS is still installed.
Changed in fuel:
importance: High → Medium
Changed in fuel:
status: Incomplete → Invalid
Basically, the problem is that we have to decide which nodes to bootstrap based not on information from the UI, but in some other way.