failed creating LVM on top of md devices
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
curtin | Fix Released | Medium | Unassigned |
subiquity | Fix Released | Undecided | Unassigned |
Bug Description
Summary:
Tried creating an LVM volume group on top of two software RAID devices and the install failed.
Expected Behavior:
The install completes and reboots into a system with /home mounted on LVM + RAID.
Actual Behavior:
Running command ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1'] with allowed return codes [0] (capture=True)
An error occurred handling 'vg-0': ProcessExecutio
Command: ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1']
Exit code: 5
Reason: -
Stdout: ''
Stderr: WARNING: Device for PV Ku4Qjy-
A volume group called vg0 already exists.
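Exit code 5 with "A volume group called vg0 already exists" suggests vgcreate is seeing stale LVM metadata from a previous install attempt on the same disks. As a rough illustration only (this is not curtin code; the helper name vg_exists is made up), the condition can be detected before vgcreate is invoked:

    import subprocess

    def vg_exists(vg_name):
        # 'vgs --noheadings -o vg_name' prints existing volume group names, one per line
        out = subprocess.run(['vgs', '--noheadings', '-o', 'vg_name'],
                             capture_output=True, text=True, check=True).stdout
        return vg_name in (line.strip() for line in out.splitlines())

    if vg_exists('vg0'):
        print('vg0 already exists; stale LVM metadata needs clearing first')
    else:
        subprocess.run(['vgcreate', '--force', '--zero=y', '--yes',
                        'vg0', '/dev/md0', '/dev/md1'], check=True)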
Steps to reproduce:
1. Get Bionic live-server ISO from July 24, 2018 (20180724)
2. Launch a VM with 1x QEMU disk (for root) and 5x libvirt disks for RAID
3. Accept defaults until the disk step, then switch to manual partitioning
4. Set up the QEMU disk with ext4 mounted at /
5. Set up 2x md devices: 2 disks each, mirrored
6. Set up an LVM volume group using the 2x md devices
7. Create a volume on the volume group with ext4 mounted at /home
8. Install fails
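For reference, the layout in steps 5-7 corresponds roughly to the following curtin storage actions, written here as Python dicts for brevity (ids, names and the size are illustrative only; the real config is in the attached curtin log and in examples/tests/lvmoverraid.yaml from the linked branch):

    storage_actions = [
        {'id': 'md0', 'type': 'raid', 'name': 'md0', 'raidlevel': 1,
         'devices': ['disk-vda', 'disk-vdb']},
        {'id': 'md1', 'type': 'raid', 'name': 'md1', 'raidlevel': 1,
         'devices': ['disk-vdc', 'disk-vdd']},
        {'id': 'vg0', 'type': 'lvm_volgroup', 'name': 'vg0',
         'devices': ['md0', 'md1']},
        {'id': 'lv-home', 'type': 'lvm_partition', 'name': 'home',
         'volgroup': 'vg0', 'size': '10G'},
        {'id': 'fs-home', 'type': 'format', 'fstype': 'ext4', 'volume': 'lv-home'},
        {'id': 'mount-home', 'type': 'mount', 'path': '/home', 'device': 'fs-home'},
    ]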
Logs:
curtin: http://
subiquity-
subiquity-
Related branches
- Server Team CI bot: Approve (continuous-integration)
- curtin developers: Pending requested
Diff: 40 lines (+9/-3), 2 files modified
debian/changelog (+3/-2)
tests/unittests/test_block_zfs.py (+6/-1)
- Scott Moser (community): Approve
- Server Team CI bot: Approve (continuous-integration)
Diff: 1195 lines (+626/-86), 23 files modified
curtin/block/__init__.py (+14/-0)
curtin/block/clear_holders.py (+11/-9)
curtin/block/lvm.py (+23/-5)
curtin/block/mdadm.py (+2/-3)
curtin/block/zfs.py (+18/-7)
curtin/commands/block_meta.py (+5/-3)
curtin/commands/install.py (+2/-1)
curtin/log.py (+43/-0)
curtin/udev.py (+2/-0)
curtin/util.py (+33/-8)
debian/changelog (+11/-0)
examples/tests/dirty_disks_config.yaml (+30/-3)
examples/tests/lvmoverraid.yaml (+98/-0)
examples/tests/vmtest_defaults.yaml (+14/-0)
tests/unittests/test_block.py (+35/-0)
tests/unittests/test_block_lvm.py (+13/-13)
tests/unittests/test_block_mdadm.py (+4/-5)
tests/unittests/test_block_zfs.py (+80/-24)
tests/unittests/test_clear_holders.py (+60/-3)
tests/unittests/test_util.py (+62/-0)
tests/vmtests/__init__.py (+15/-1)
tests/vmtests/test_lvm_raid.py (+50/-0)
tests/vmtests/test_lvm_root.py (+1/-1)
- Chad Smith: Disapprove
- Server Team CI bot: Approve (continuous-integration)
Diff: 772 lines (+412/-43), 19 files modified
curtin/block/clear_holders.py (+3/-0)
curtin/block/lvm.py (+23/-5)
curtin/block/mdadm.py (+2/-3)
curtin/commands/block_meta.py (+2/-1)
curtin/commands/install.py (+2/-1)
curtin/log.py (+43/-0)
curtin/udev.py (+2/-0)
curtin/util.py (+33/-8)
debian/changelog (+10/-0)
examples/tests/dirty_disks_config.yaml (+30/-3)
examples/tests/lvmoverraid.yaml (+98/-0)
examples/tests/vmtest_defaults.yaml (+14/-0)
tests/unittests/test_block_lvm.py (+13/-13)
tests/unittests/test_block_mdadm.py (+4/-5)
tests/unittests/test_clear_holders.py (+5/-2)
tests/unittests/test_util.py (+62/-0)
tests/vmtests/__init__.py (+15/-1)
tests/vmtests/test_lvm_raid.py (+50/-0)
tests/vmtests/test_lvm_root.py (+1/-1)
- Server Team CI bot: Approve (continuous-integration)
- Scott Moser (community): Approve
Diff: 353 lines (+223/-24), 8 files modified
curtin/block/clear_holders.py (+3/-0)
curtin/block/lvm.py (+23/-5)
examples/tests/dirty_disks_config.yaml (+30/-3)
examples/tests/lvmoverraid.yaml (+98/-0)
tests/unittests/test_block_lvm.py (+13/-13)
tests/unittests/test_clear_holders.py (+5/-2)
tests/vmtests/test_lvm_raid.py (+50/-0)
tests/vmtests/test_lvm_root.py (+1/-1)
Changed in curtin:
status: Confirmed → In Progress
Changed in curtin:
importance: Undecided → Medium
Changed in subiquity:
status: New → Fix Released
A volume group called vg0 already exists.
Hrm, looking at the curtin log, it didn't detect a vg0 volume group.
Current device storage tree:
block/sda/sda2 (partition)
block/sda/sda1 (partition)
block/md127 (raid)
block/sda (disk)
block/vdb (disk)
block/vda (disk)
block/vdc (disk)
block/vdd (disk)
sda
|-- sda2
`-- sda1
vdb
`-- md127
vda
`-- md127
vdc
`-- md127
vdd
`-- md127
Shutdown Plan:
{'level': 1, 'device': '/sys/class/
{'level': 1, 'device': '/sys/class/
{'level': 1, 'device': '/sys/class/
{'level': 0, 'device': '/sys/class/
{'level': 0, 'device': '/sys/class/
{'level': 0, 'device': '/sys/class/
{'level': 0, 'device': '/sys/class/
{'level': 0, 'device': '/sys/class/
We wipe the contents of the assembled raid device, so if that previously held LVM info, it will get wiped. We run an LVM scan to see if any devices are present, but none showed up.
If you still have access to the system, can you run a few commands:
pvs -a, vgs -a, lvs -a
dmsetup ls
dmsetup status
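If it helps, that output can be gathered in one pass with a small script like the following (the commands are exactly the ones listed above; none of them modify the system):

    import subprocess

    # Collect the LVM and device-mapper state requested above, printing each
    # command with its output so it can be pasted back into the bug.
    for cmd in (['pvs', '-a'], ['vgs', '-a'], ['lvs', '-a'],
                ['dmsetup', 'ls'], ['dmsetup', 'status']):
        res = subprocess.run(cmd, capture_output=True, text=True)
        print('$ ' + ' '.join(cmd))
        print(res.stdout or res.stderr)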