Reset environment must be less brutal
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Invalid | Medium | Fuel Sustaining | |
| 6.1.x | Won't Fix | Medium | MOS Maintenance | |
| 7.0.x | Won't Fix | Medium | MOS Maintenance | |
| 8.0.x | Won't Fix | Medium | MOS Maintenance | |
| Mitaka | Invalid | Critical | Sergii Rizvan | |
Bug Description
So,
1) A customer accidentally reset the environment.
2) Half of the support team spent tens of hours on WebEx trying to restore it, because Fuel wipes the first several megabytes of each disk during reset.
This is not the first time a customer's production environment has been reset.
This code in all versions 6.0-8.0 erases several megabytes of each disk, which causes data loss.
The current code wipes only the MBR
https:/
(after this commit https:/
) and it leaves the possibility to restore data (see comment #6).
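For illustration only (this is not the actual Fuel erase code; the device path and the "several megabytes" size are assumptions), the two behaviours differ roughly like this:

# Old behaviour: zero the first several megabytes, destroying the partition
# table, LVM metadata and filesystem superblocks on /dev/vda.
dd if=/dev/zero of=/dev/vda bs=1M count=10

# New behaviour: zero only the 512-byte MBR, leaving partition contents in place.
dd if=/dev/zero of=/dev/vda bs=512 count=1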
Ask:
1) Backport this code to previous releases using MU, to prevent the possibility of data loss for existing customers.
2) Create backups of the MBR and store them on the Fuel master before the environment is erased; a sketch of this is below (https:/
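A minimal sketch of what ask 2) could look like, assuming five nodes, a system disk at /dev/vda, and plain ssh/dd from the Fuel master; this is an outline under those assumptions, not a proposed implementation:

# On the Fuel master, before erasing the environment: save the protective MBR
# plus the primary GPT (first 34 sectors) of each node's system disk.
for i in $(seq 1 5); do ssh node-$i 'dd if=/dev/vda bs=512 count=34' > node-${i}_vda_mbr_backup ; done

# After an accidental reset, the saved sectors can be written back the same way.
for i in $(seq 1 5); do ssh node-$i 'dd of=/dev/vda bs=512 count=34' < node-${i}_vda_mbr_backup ; done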
Related conversation:
http://
Changed in fuel:
milestone: none → 10.0
assignee: nobody → Fuel Sustaining (fuel-sustaining-team)
tags: added: area-library
Changed in fuel:
importance: Undecided → Medium
status: New → Confirmed
summary: Reset environment should be less brutal → Reset environment must be less brutal
description: updated
Changed in fuel:
status: Confirmed → Invalid
tags: added: customer-found
tags: added: support
description: updated
On 8.0 I have tried to make a backup of the GPT and LVM metadata, and even with those backups restoration does not work very well.
Make gpt backup:
[root@fuel ~]# for i in $(seq 1 5); do ssh node-$i 'sgdisk /dev/vda --backup=/tmp/$(hostname -s)_vda_backup' ; done
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
Copy it to master node:
for i in $(seq 1 5); do scp node-$i:/tmp/*_vda_backup . ; done
Make lvm metadata backup:
for i in $(seq 1 5); do scp -r node-$i:/etc/lvm/backup node-${i}_lvm_backup/ ; done
[root@fuel ~]# ls -lad *backup
drwx------ 2 root root 4096 Jun 29 12:10 node-1_lvm_backup
-rw-r--r-- 1 root root 17920 Jun 29 12:07 node-1_vda_backup
drwx------ 2 root root 4096 Jun 29 12:10 node-2_lvm_backup
-rw-r--r-- 1 root root 17920 Jun 29 12:07 node-2_vda_backup
drwx------ 2 root root 4096 Jun 29 12:10 node-3_lvm_backup
-rw-r--r-- 1 root root 17920 Jun 29 12:07 node-3_vda_backup
drwx------ 2 root root 4096 Jun 29 12:10 node-4_lvm_backup
-rw-r--r-- 1 root root 17920 Jun 29 12:07 node-4_vda_backup
drwx------ 2 root root 4096 Jun 29 12:10 node-5_lvm_backup
-rw-r--r-- 1 root root 17920 Jun 29 12:07 node-5_vda_backup
After that I reset the environment and copied the backups back:
[root@fuel ~]# for i in $(seq 1 5); do scp node-${i}_vda_backup node-${i}:/root/ ; done
[root@fuel ~]# for i in $(seq 1 5); do scp -r node-${i}_lvm_backup node-${i}:/etc/lvm/backup ; done
After that I restored the GPT:
[root@fuel ~]# for i in $(seq 1 5); do ssh node-$i "sgdisk --load-backup=/root/node-${i}_vda_backup /dev/vda" ; done
LVM restoration was a little bit harder, and I managed to restore the root partition, which is ext4. But data on the XFS partitions (/var/lib/nova) and the Ceph OSDs was lost.
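For reference, the LVM part of such a restore is roughly of the following shape (the volume group name "os", the device /dev/vda5 and the PV UUID are placeholders; the real values come from the metadata file restored under /etc/lvm/backup):

# Recreate the physical volume with the UUID recorded in the LVM metadata backup,
# then restore the volume group configuration and activate it.
pvcreate --uuid "<PV-UUID-from-backup>" --restorefile /etc/lvm/backup/os /dev/vda5
vgcfgrestore os
vgchange -ay os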