Installation of Focal on a Linux RAID consistently fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
curtin (Ubuntu) | Fix Released | High | Unassigned |
subiquity (Ubuntu) | Fix Released | Undecided | Unassigned |
Bug Description
During installation of Focal, I choose to set up custom partitioning. I have two disks, and I set up these two disks in a Linux RAID.
In addition to the autogenerated GRUB partitions, which are created when I designate each of the two disks as boot disks, I create an additional partition which takes part in the RAID device (md0).
Just after the installer attempts to begin installation, the installer fails. The most descriptive part of the crash report is:
comparing device lists: expected: ['/dev/sda2', '/dev/sda2'] found: ['/dev/sdb2', '/dev/sda2']
An error occured handling 'raid-md127': ValueError - RAID array device list does not match. Missing: set() Extra: {'/dev/sdb2'}
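A minimal sketch of the kind of check that produces this error (illustrative only, not curtin's actual code): the installer compares the configured RAID member devices against the members it discovers, as sets. If two disks resolve to the same identifier, here because they share a WWN, the expected list ends up with a duplicate entry and the comparison reports a spurious "Extra" device:

```python
# Hypothetical sketch of the device-list check behind the error above;
# the function name and message format are illustrative, not curtin's API.
def check_raid_members(expected, found):
    """Raise if the expected and discovered RAID members differ as sets."""
    missing = set(expected) - set(found)
    extra = set(found) - set(expected)
    if missing or extra:
        raise ValueError(
            "RAID array device list does not match. "
            "Missing: %s Extra: %s" % (missing, extra))

# Both disks report WWN 0x0000000000000000, so resolving members by WWN
# maps /dev/sdb2 onto /dev/sda2 and the expected list gets a duplicate:
expected = ['/dev/sda2', '/dev/sda2']   # should have been sda2 + sdb2
found = ['/dev/sdb2', '/dev/sda2']

try:
    check_raid_members(expected, found)
except ValueError as e:
    print(e)  # ... Missing: set() Extra: {'/dev/sdb2'}
```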
In an attempt to work around this, I dropped into a shell just after the installer started, created a Linux RAID manually, and then proceeded with the installer, choosing to install Ubuntu onto the existing RAID. The same error was given.
It seems that no matter what I try, the installer fails when I attempt to install onto a Linux RAID.
Installation onto a plain partition works fine.
Related branches
- Server Team CI bot: Needs Fixing (continuous-integration)
- Chad Smith: Approve
Diff: 10726 lines (+6948/-918), 83 files modified
Makefile (+15/-6)
curtin/__init__.py (+2/-0)
curtin/block/__init__.py (+190/-12)
curtin/block/bcache.py (+242/-1)
curtin/block/clear_holders.py (+34/-51)
curtin/block/lvm.py (+43/-6)
curtin/block/mdadm.py (+10/-25)
curtin/block/mkfs.py (+8/-2)
curtin/block/multipath.py (+90/-11)
curtin/block/schemas.py (+13/-2)
curtin/commands/apt_config.py (+1/-5)
curtin/commands/block_discover.py (+11/-2)
curtin/commands/block_meta.py (+597/-474)
curtin/commands/clear_holders.py (+44/-9)
curtin/commands/curthooks.py (+225/-38)
curtin/distro.py (+18/-1)
curtin/storage_config.py (+64/-36)
curtin/udev.py (+25/-2)
curtin/util.py (+13/-2)
debian/changelog (+58/-0)
dev/null (+0/-6)
doc/topics/storage.rst (+108/-19)
examples/tests/bcache-ceph-nvme-simple.yaml (+2/-2)
examples/tests/bcache-ceph-nvme.yaml (+2/-2)
examples/tests/filesystem_battery.yaml (+1/-0)
examples/tests/mirrorboot-uefi.yaml (+17/-1)
examples/tests/multipath-lvm-part-wipe.yaml (+125/-0)
examples/tests/multipath-lvm.yaml (+121/-0)
examples/tests/multipath.yaml (+6/-0)
examples/tests/preserve-bcache.yaml (+82/-0)
examples/tests/preserve-lvm.yaml (+77/-0)
examples/tests/preserve-partition-wipe-vg-simple.yaml (+62/-0)
examples/tests/preserve-partition-wipe-vg.yaml (+116/-0)
examples/tests/preserve-raid.yaml (+4/-2)
examples/tests/reuse-lvm-member-partition.yaml (+94/-0)
examples/tests/reuse-msdos-partitions.yaml (+77/-0)
examples/tests/reuse-raid-member-wipe-partition.yaml (+2/-0)
examples/tests/reuse-raid-member-wipe.yaml (+2/-1)
examples/tests/uefi_reuse_esp.yaml (+4/-2)
examples/tests/vmtest_defaults.yaml (+19/-1)
helpers/common (+11/-5)
tests/data/probert_storage_bogus_wwn.json (+1258/-0)
tests/data/probert_storage_nvme_multipath.json (+310/-0)
tests/data/udevadm_info_sandisk_cruzer.txt (+54/-0)
tests/unittests/helpers.py (+52/-3)
tests/unittests/test_apt_custom_sources_list.py (+2/-0)
tests/unittests/test_apt_source.py (+2/-0)
tests/unittests/test_block.py (+104/-5)
tests/unittests/test_block_lvm.py (+31/-1)
tests/unittests/test_block_mdadm.py (+8/-6)
tests/unittests/test_block_mkfs.py (+9/-0)
tests/unittests/test_block_multipath.py (+45/-0)
tests/unittests/test_block_zfs.py (+24/-20)
tests/unittests/test_clear_holders.py (+51/-37)
tests/unittests/test_commands_block_meta.py (+885/-7)
tests/unittests/test_commands_clear_holders.py (+24/-0)
tests/unittests/test_commands_collect_logs.py (+2/-0)
tests/unittests/test_curthooks.py (+483/-8)
tests/unittests/test_distro.py (+51/-2)
tests/unittests/test_gpg.py (+50/-44)
tests/unittests/test_make_dname.py (+3/-3)
tests/unittests/test_storage_config.py (+189/-3)
tests/unittests/test_udev.py (+43/-6)
tests/unittests/test_util.py (+25/-2)
tests/vmtests/__init__.py (+53/-17)
tests/vmtests/releases.py (+20/-3)
tests/vmtests/test_basic.py (+33/-19)
tests/vmtests/test_fs_battery.py (+7/-0)
tests/vmtests/test_mdadm_bcache.py (+34/-0)
tests/vmtests/test_multipath.py (+80/-3)
tests/vmtests/test_multipath_lvm.py (+76/-0)
tests/vmtests/test_preserve_bcache.py (+67/-0)
tests/vmtests/test_preserve_lvm.py (+80/-0)
tests/vmtests/test_preserve_partition_wipe_vg.py (+59/-0)
tests/vmtests/test_reuse_lvm_member.py (+34/-0)
tests/vmtests/test_reuse_msdos_partitions.py (+31/-0)
tests/vmtests/test_reuse_raid_member.py (+1/-1)
tests/vmtests/test_ubuntu_core.py (+9/-0)
tools/block-discover-to-config (+1/-1)
tools/schema-validate-storage (+1/-0)
tools/vmtest-filter (+2/-1)
tools/vmtest-sync-images (+1/-0)
tox.ini (+19/-0)
- Server Team CI bot: Approve (continuous-integration)
- Michael Hudson-Doyle: Approve
- curtin developers: Pending requested
Diff: 1354 lines (+1312/-1), 3 files modified
curtin/storage_config.py (+9/-1)
tests/data/probert_storage_bogus_wwn.json (+1258/-0)
tests/unittests/test_storage_config.py (+45/-0)
Changed in subiquity (Ubuntu):
status: New → Triaged
Wow, what a confusing error message; I'm sorry about this!
This seems to be caused by the fact that both disks in the curtin config have the same wwn:
- {ptable: gpt, serial: Corsair_Force_GS_13207907000097410026, wwn: '0x0000000000000000',
  path: /dev/sda, preserve: true, name: '', grub_device: true, type: disk, id: disk-sda}
- {ptable: gpt, serial: Corsair_Force_GS_1320790700009741003D, wwn: '0x0000000000000000',
  path: /dev/sdb, preserve: true, name: '', grub_device: true, type: disk, id: disk-sdb}
This seems to come straight from the udev database:
P: /devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
...
E: ID_WWN=0x0000000000000000
E: ID_WWN_WITH_EXTENSION=0x0000000000000000
(and the same for sdb). This in turn seems to be because SCSI_IDENT_LUN_NAA_LOCAL is 0000000000000000 (from 55-scsi-sg3_id.rules) for both drives, and I'm fairly sure that's something sg_inq --export spits out, but now we're really running into the limits of my knowledge, I'm afraid. There's a bug somewhere, maybe in your drives' firmware or maybe in sg3-utils...
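The duplicate WWN can be spotted directly in the udev database. As a rough illustration (this helper and its name are hypothetical, not part of curtin or udev), one could scan output in the style of udevadm info for disks that report the same ID_WWN:

```python
# Illustrative helper (not part of any real tool): scan udev-style
# "P: <device path>" / "E: KEY=VALUE" records for disks sharing an ID_WWN.
from collections import defaultdict

def find_duplicate_wwns(udev_export):
    """Return {wwn: [device names]} for WWNs reported by more than one disk."""
    wwns = defaultdict(list)
    devname = None
    for line in udev_export.splitlines():
        line = line.strip()
        if line.startswith('P: '):
            devname = line.split('/')[-1]
        elif line.startswith('E: ID_WWN=') and devname:
            wwns[line.split('=', 1)[1]].append(devname)
    return {wwn: devs for wwn, devs in wwns.items() if len(devs) > 1}

# Sample records modeled on the udev output quoted above:
sample = """\
P: /devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
E: ID_WWN=0x0000000000000000
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb
E: ID_WWN=0x0000000000000000
"""
print(find_duplicate_wwns(sample))
# {'0x0000000000000000': ['sda', 'sdb']}
```

Any WWN appearing twice here is a candidate for exactly the kind of disk-identity confusion seen in this bug.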