RAID/LVM configuration doesn't display drives in selection menu - expected MD_LEVEL

Bug #1975860 reported by computer_freak_8
This bug affects 1 person
Affects: subiquity (Ubuntu)
Status: In Progress
Importance: Undecided
Assigned to: Unassigned

Bug Description

Trying to install 22.04 LTS Server from the Live image. Both the subiquity version provided with the ISO and the "new" version it prompts to load from the internet appear to have this issue.

I can select items for a RAID/LVM volume, but they are not displayed; they show up as blank lines, but the total size at the bottom changes as I semi-blindly select/deselect them.

I'm not sure how to grab the versions as specified, since this is prior to having a working system.

Image of problem attached.

Tags: fr-2620
Revision history for this message
computer_freak_8 (j8t8b) wrote :
affects: ubiquity (Ubuntu) → subiquity (Ubuntu)
Revision history for this message
Dan Bungert (dbungert) wrote :

Would you upload a copy of what /var/log/installer/subiquity-server-debug.log points to?

Revision history for this message
computer_freak_8 (j8t8b) wrote :

Ahh, gotcha. I was able to transfer it via SSH; it's attached here.
This is from a different run, but this installer crashed on its own after sitting overnight.

Dan Bungert (dbungert)
tags: added: fr-2620
Olivier Gayot (ogayot)
summary: - RAID/LVM configuration doesn't display drives in selection menu
+ RAID/LVM configuration doesn't display drives in selection menu -
+ expected MD_LEVEL
Revision history for this message
Olivier Gayot (ogayot) wrote (last edit):

Another report was opened by @nkshirsagar on GitHub (probert) after a customer case: https://github.com/canonical/probert/issues/125

Marking confirmed.

Nikhil also mentioned that

> udevadm does not show the MD_LEVEL information on an inactive RAID, which breaks the probe.

and a corresponding PR was opened: https://github.com/canonical/probert/pull/124
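
For illustration, here is a minimal Python sketch of the kind of fallback a storage probe could apply when udev leaves MD_LEVEL out: treat the udev properties as possibly incomplete and merge in whatever mdadm --detail --export reports. This is not the actual probert change from the PR above; the pyudev usage and the helper names (mdadm_export, raid_properties) are assumptions for the sketch, and mdadm itself may still report less for an inactive array.

#!/usr/bin/env python3
"""Sketch only: merge udev's MD_* properties with mdadm output.
Assumes pyudev and mdadm are installed; run as root."""
import subprocess

import pyudev


def mdadm_export(devnode):
    """Return the KEY=value pairs printed by mdadm --detail --export."""
    try:
        out = subprocess.run(
            ["mdadm", "--detail", "--export", devnode],
            capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return {}
    return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)


def raid_properties(device):
    """Merge udev properties with mdadm output, preferring udev's values.

    64-md-raid.rules skips the mdadm IMPORT for inactive arrays, so keys
    such as MD_LEVEL can be missing from udev; this fallback tries to
    recover them.
    """
    props = {k: v for k, v in device.properties.items() if k.startswith("MD_")}
    for key, value in mdadm_export(device.device_node).items():
        props.setdefault(key, value)
    return props


if __name__ == "__main__":
    context = pyudev.Context()
    for dev in context.list_devices(subsystem="block", DEVTYPE="disk"):
        if dev.sys_name.startswith("md"):
            print(dev.device_node, raid_properties(dev))

The design point is simply that the udev property database is treated as a cache that can be incomplete, rather than as the sole source of truth for RAID metadata.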

Changed in subiquity (Ubuntu):
status: New → Confirmed
Revision history for this message
nikhil kshirsagar (nkshirsagar) wrote :

It's expected behavior for MD_LEVEL not to be set for inactive RAID devices. Looking at 64-md-raid.rules:

# do not edit this file, it will be overwritten on update

SUBSYSTEM!="block", GOTO="md_end"

# handle md arrays
ACTION!="add|change", GOTO="md_end"
KERNEL!="md*", GOTO="md_end"

# partitions have no md/{array_state,metadata_version}, but should not
# for that reason be ignored.
ENV{DEVTYPE}=="partition", GOTO="md_ignore_state"

# container devices have a metadata version of e.g. 'external:ddf' and
# never leave state 'inactive'
ATTR{md/metadata_version}=="external:[A-Za-z]*", ATTR{md/array_state}=="inactive", GOTO="md_ignore_state"
TEST!="md/array_state", ENV{SYSTEMD_READY}="0", GOTO="md_end"
ATTR{md/array_state}=="|clear|inactive", ENV{SYSTEMD_READY}="0", GOTO="md_end" <==
LABEL="md_ignore_state"

IMPORT{program}="/sbin/mdadm --detail --export $devnode"
ENV{DEVTYPE}=="disk", ENV{MD_NAME}=="?*", SYMLINK+="disk/by-id/md-name-$env{MD_NAME}", OPTIONS+="string_escape=replace"
<SNIP>

LABEL="md_end"

So IMPORT{program}="/sbin/mdadm --detail --export $devnode" is skipped if the array is inactive.
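
As an illustrative way to see this on a running system (assuming udevadm is available and the script runs as root), the following Python snippet prints each md device's kernel array_state next to the MD_LEVEL value udev exported; for an inactive array, MD_LEVEL is expected to come back missing:

#!/usr/bin/env python3
"""Sketch only: compare the kernel's array_state with udev's MD_LEVEL."""
import glob
import pathlib
import subprocess

for state_file in sorted(glob.glob("/sys/block/md*/md/array_state")):
    md = pathlib.Path(state_file).parents[1].name          # e.g. "md127"
    array_state = pathlib.Path(state_file).read_text().strip()
    props = subprocess.run(
        ["udevadm", "info", "--query=property", f"--name=/dev/{md}"],
        capture_output=True, text=True, check=False).stdout
    level = next((line.split("=", 1)[1] for line in props.splitlines()
                  if line.startswith("MD_LEVEL=")), "<missing>")
    print(f"{md}: array_state={array_state} MD_LEVEL={level}")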

Dan Bungert (dbungert)
Changed in subiquity (Ubuntu):
status: Confirmed → In Progress