Storage nodes go for extra reboot after unlock (manifest apply failure)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Fix Released | Medium | Daniel Badea |
Bug Description
Brief Description
-----------------
Storage nodes go through an extra reboot after unlock when an unused OSD has been assigned to a newly created storage tier.
Severity
--------
Major
Steps to Reproduce
------------------
1. Create a new storage tier
2. Lock a storage node
3. Assign an unused OSD on that storage node to the newly created storage tier
4. Unlock the storage node*
5. Repeat for all storage nodes
*On reboot, a failure is displayed on the console: "failed to apply puppet manifest". The storage node then reboots again; eventually the node does come up.
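The steps above can be sketched with the StarlingX `system` CLI. The tier name (`nova-tier`), host name (`storage-0`), and UUID placeholders are illustrative assumptions, and exact flags may vary by release:

```shell
# Create a new storage tier on the ceph cluster (tier name is hypothetical)
system storage-tier-add ceph_cluster nova-tier

# Lock the storage node before changing its OSD assignment
system host-lock storage-0

# List disks to find the UUID of an unused one, then assign it
# to the new tier (UUIDs below are placeholders, not real values)
system host-disk-list storage-0
system host-stor-add storage-0 osd <disk-uuid> --tier-uuid <tier-uuid>

# Unlock the node -- this is where the extra reboot is observed
system host-unlock storage-0
```

Repeating the lock/assign/unlock sequence on each storage node reproduces the extra reboot on every one of them.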
Looking at the puppet manifest on storage-0, the following is seen:
2018-10-
Expected Behavior
------------------
A single reboot cycle should bring up the node and no manifest failures should be seen.
Actual Behavior
----------------
Manifest failures seen on unlock resulting in additional reboot.
Reproducibility
---------------
Reproducible; the extra reboot was seen on multiple storage nodes in the system.
System Configuration
--------------------
Multi-node storage system
Branch/Pull Time/Commit
-----------------------
master as of 2018-10-09_01-52-01
Timestamp/Logs
--------------
2018-10-
tags: added: stx.2019.05 removed: stx.2019.03
tags: added: stx.2.0 removed: stx.2019.05
stx.2019.03 - specific to storage tiers; the system still comes up after the extra reboot. Not required for stx.2018.10.