failed creating LVM on top of md devices

Bug #1783413 reported by Joshua Powers on 2018-07-24

Bug Description

Tried creating an LVM volume group on top of two software RAID devices, and the install failed.

Expected Behavior:
Install works and reboots into a system with a /home mounted on LVM + RAID

Actual Behavior:

Running command ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1'] with allowed return codes [0] (capture=True)
        An error occured handling 'vg-0': ProcessExecutionError - Unexpected error while running command.
        Command: ['vgcreate', '--force', '--zero=y', '--yes', 'vg0', '/dev/md0', '/dev/md1']
        Exit code: 5
        Reason: -
        Stdout: ''
        Stderr: WARNING: Device for PV Ku4Qjy-hGry-zaUE-el9b-Vee1-c0eB-43PGwR not found or rejected by a filter.
                  WARNING: Device for PV Ku4Qjy-hGry-zaUE-el9b-Vee1-c0eB-43PGwR not found or rejected by a filter.
                  A volume group called vg0 already exists.

Steps to reproduce:
1. Get the Bionic live-server ISO from July 24, 2018 (20180724)
2. Launch a VM with 1x QEMU disk (for root) and 5x libvirt disks for RAID
3. Accept defaults until the disk step, then choose manual partitioning
4. Set up the QEMU disk with ext4 mounted at /
5. Set up 2x md devices: 2 disks each, mirrored
6. Set up LVM using the 2x md devices
7. Create a volume group on the LVM device with ext4 mounted at /home
8. Install fails
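
For reference, a hypothetical manual equivalent of steps 5-7 on the command line might look like the following (device names are illustrative only; these commands are destructive and require root):

```shell
# 5. Two mirrored md devices, two member disks each
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda /dev/vdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/vdc /dev/vdd

# 6-7. LVM on top of the two arrays, with /home on ext4
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
lvcreate -n home -l 100%FREE vg0
mkfs.ext4 /dev/vg0/home
```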


Ryan Harper (raharper) wrote :

A volume group called vg0 already exists.

Hrm, looking at the curtin log, it didn't detect a vg0 volume group.

Current device storage tree:
|-- sda2
`-- sda1
`-- md127
`-- md127
`-- md127
`-- md127
Shutdown Plan:
{'level': 1, 'device': '/sys/class/block/sda/sda2', 'dev_type': 'partition'}
{'level': 1, 'device': '/sys/class/block/sda/sda1', 'dev_type': 'partition'}
{'level': 1, 'device': '/sys/class/block/md127', 'dev_type': 'raid'}
{'level': 0, 'device': '/sys/class/block/sda', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdb', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vda', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdc', 'dev_type': 'disk'}
{'level': 0, 'device': '/sys/class/block/vdd', 'dev_type': 'disk'}

We wipe the contents of the assembled raid device, so if it previously held LVM metadata, that gets wiped. We run an LVM scan to see if any devices are present, but none showed up.

If you still have access to the system, can you run a few commands:

pvs -a, vgs -a, lvs -a
dmsetup ls
dmsetup status

Joshua Powers (powersj) wrote :

sudo pvs -a: empty
sudo vgs -a: empty
sudo lvs -a: empty
sudo dmsetup ls: No devices found
sudo dmsetup status: No devices found

Ryan Harper (raharper) wrote :

OK, I've taken this configuration and recreated the issue with our vmtest: we create the storage config, then tear down the LVM and RAID (removing configuration from /etc/lvm and /etc/mdadm), then run the storage config a second time. We see the same storage tree (missing the LVM dm0 entries) and then trigger the vgcreate error where vg0 already exists.

The fix is for curtin to add an LVM scan of block devices after assembling RAID arrays; this scan will activate any LVM volume groups that can be assembled. Once curtin can "see" the LVM devices, it will properly remove them.
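
A rough sketch of what such a post-assembly scan looks like at the command line (commands and ordering are illustrative, not curtin's exact code):

```shell
# After the md arrays are assembled, rescan so LVM sees any stale
# volume groups living on top of them, then activate those groups.
pvscan --cache      # refresh LVM's view of available block devices
vgscan              # discover volume groups on the assembled arrays
vgchange -ay        # activate them so their LVs appear under /dev

# With vg0 now visible, a teardown path can remove it cleanly, e.g.:
# vgremove --force vg0
```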

Changed in curtin:
status: New → Confirmed
Ryan Harper (raharper) on 2018-07-26
Changed in curtin:
status: Confirmed → In Progress
Scott Moser (smoser) on 2018-07-27
Changed in curtin:
importance: Undecided → Medium

This bug is fixed with commit 6a776e15 to curtin on branch master.
To view that commit see the following URL:

Changed in curtin:
status: In Progress → Fix Committed

This bug is believed to be fixed in curtin in version 18.2. If this is still a problem for you, please make a comment and set the state back to New.

Thank you.

Changed in curtin:
status: Fix Committed → Fix Released
Changed in subiquity:
status: New → Fix Released