Existing Logical Volumes on multipathed SCSI disks are not automatically activated during installation

Bug #1570580 reported by bugproxy
This bug affects 1 person
Affects                     Status        Importance  Assigned to           Milestone
Ubuntu on IBM z Systems     Fix Released  High        Unassigned
debian-installer (Ubuntu)   Fix Released  High        Dimitri John Ledkov   ubuntu-17.07

Bug Description

Installer version: 447
Kernel: 4.4.0.18
Description/Reproduction:
Logical Volumes that exist on multipathed SCSI disks from a previous installation are not automatically activated and thus cannot be reused. Only the partitions are visible, and their type is shown as "lvm" (see screenshot).

Workaround:

When the partitioned disks are shown in the "Partition disks" menu, open a shell. List all logical volumes with the lvs command, and activate each logical volume with lvchange -ay <volumegroupname>/<lvolname>. Then choose "Configure the Logical Volume Manager", confirm keeping the current partition layout with "Yes", and select "Finish" without changing anything. The Logical Volumes are then displayed in the "Partition disks" menu, and filesystems can be created and mount points defined.
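A minimal sketch of the workaround commands in the installer shell ("vg0" and "root" are example names; substitute whatever lvs reports):

$ lvs                     # list all detected logical volumes and their volume groups
$ lvchange -ay vg0/root   # activate one logical volume by <volumegroupname>/<lvolname>
$ vgchange -ay            # alternatively, activate every LV in all detected volume groups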

I will attach syslog and partman of the installation attempt.

So obviously an lvchange -ay on all detected logical volumes is missing at the proper time...

Revision history for this message
bugproxy (bugproxy) wrote : syslog partman and screenshot

Default Comment by Bridge

tags: added: architecture-s39064 bugnameltc-140322 severity-high targetmilestone-inin1604
Changed in ubuntu:
assignee: nobody → Skipper Bug Screeners (skipper-screen-team)
Luciano Chavez (lnx1138)
affects: ubuntu → debian-installer (Ubuntu)
Revision history for this message
Dimitri John Ledkov (xnox) wrote :

This conflicts with previous bug reports. Originally / earlier in the xenial release cycle we were shipping udev rules to automatically activate all discovered LVM groups and volumes. This then resulted in disk drives already being in use, and in failures to e.g. dasdfmt a DASD drive. There was no way to deactivate the already-activated volume groups. My understanding was that there is / was a d-i way to preseed or activate all the available volume groups; I shall double-check that. At the moment, I am inclined to say it is more important to keep the use case "wipe any old install, install with new partitioning layouts", where the old install may or may not have been LVM-based.

Are you performing manual installations, or are you preseeding automated installations and want to reuse LVM groups/volumes? If you are automating this advanced setup, I highly recommend adding the relevant shell commands to partman/early_command as described in the installation guide at https://help.ubuntu.com/16.04/installation-guide/s390x/apbs05.html#preseed-hooks
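For illustration, a hedged preseed sketch along those lines (not taken from the guide itself; it assumes the LVM tools are available in the installer environment):

# preseed: activate any pre-existing volume groups before partman starts
d-i partman/early_command string vgscan; vgchange -ay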

Why is this bug report filed at severity-high? Has hws signed off on this? "bugproxy" is not really authorised to open bug reports at severity high; they should be escalated to high via PM coordination between hws and lou.

We are effectively past final freeze now, thus everything from now on should be filed as targetmilestone-inin1610, with the expectation that it can only be SRUed into xenial.

Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2016-04-15 02:13 EDT-------
We are after final freeze. I changed the targetmilestone to 16.10...

tags: added: targetmilestone-inin1610
removed: targetmilestone-inin1604
bugproxy (bugproxy)
tags: added: targetmilestone-inin16041
removed: targetmilestone-inin1610
dann frazier (dannf)
Changed in ubuntu-z-systems:
importance: Undecided → High
bugproxy (bugproxy)
tags: added: severity-medium
removed: severity-high
Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Could you please explain the exact user scenarios and user stories here. Is this just a hypothetical use case / testing, or are there real scenarios that you have in mind?

In Ubuntu and Debian we generally do not activate existing volume groups and volumes, because the idea is that one installs things afresh and doesn't reuse/resize existing things, especially not LVM groups. We even opportunistically wipe LVM metadata off the drives before a guided full-disk install.

There is currently no support in e.g. Debian and Ubuntu for reusing LVM groups and volumes per se.

There are a couple of things we could do, but to get this right we'd want to know why or how you envision this being used. In general, users do not share a volume group across multiple installs. So in the various scenarios where one does and doesn't have existing volume groups, when would you want to activate them or not, and why? And if we do activate an existing volume group, how do you see it booting, and where would the zipl installation go? Do you expect it to install with guided partitioning, and how? Please elaborate on the motivation behind activating existing volume groups from first principles.

Changed in debian-installer (Ubuntu):
status: New → Incomplete
Changed in ubuntu-z-systems:
status: New → Incomplete
importance: High → Medium
Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2016-08-29 10:38 EDT-------
(In reply to comment #7)
> In Ubuntu and Debian, in general we do not activate existing volume groups
> and volumes, because the idea is that in general one installs things afresh
> and doesn't reuse/resize existing things. Especially not LVM groups. We even
> opportunistically wipe lvm metadata off the drives before guided full disk
> install.
>
> There is no current support in e.g. debian and ubuntu, to reuse lvm groups
> and volumes per se.

Could you point me to where this behaviour is documented? I did some research but could not find anything.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

I based my comment on extensive knowledge of the debian-installer / partman source code =)

There are bug reports about it open in debian BTS, e.g. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=451535

And do note that this applies only during the debian-installer run; the post-install system can access and activate whichever LVM groups/volumes are desired and set them up as mount points in /etc/fstab. Similarly, one can preseed arbitrary shell commands to achieve something similar, either as post-installation exec commands or in partman/early_command. But there is no declarative preseed syntax to e.g. activate this group or reuse that volume.
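As a hedged illustration of that post-install route (the volume group "olddata", its logical volume "home", and the mount point are example names only):

$ sudo vgscan                    # scan for volume groups left over from the old install
$ sudo vgchange -ay olddata      # activate the old volume group
$ sudo mkdir -p /srv/olddata
$ sudo mount /dev/olddata/home /srv/olddata
# then persist the mount with an /etc/fstab line such as:
# /dev/olddata/home  /srv/olddata  ext4  defaults  0  2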

I am more interested in production deployment examples that reuse volume groups across multiple different distributions, and in how zipl/boot is managed in those scenarios.

Or is this request to activate volume groups just to prevent data loss?
Activated volume groups would put block devices in use, potentially preventing them from being formatted. (That is especially the case if only some of the physical devices of a particular volume group are activated.)

Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2016-10-24 06:25 EDT-------
(In reply to comment #10)

> Or is this request to active volume groups just to prevent data-loss?
> Activated volume groups, would put block devices in use, potentially
> preventing them from being formatted. (That is especially the case if only
> partial number of physical devices are activated of a particular volume
> group)

Yes. Assume somebody is migrating from another distro to Xenial Server and wants to keep their home/data LVs. In that case it is highly desirable to preserve the LVs and mount them without formatting.

Suggestion:
Activate the VGs/LVs, then ask the user which underlying physical device (not the physical volume) they want to partition or format, if any. If partitioning/formatting is requested, deactivate the LV, VG, and PV (see the sketch below).
If the user wants to reuse only some of the LVs, they should have the option to select the unwanted LVs for deletion and recreation.
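For the deactivation/teardown step, presumably something along these lines ("vg0", "unwanted", and the multipath device path are hypothetical; vgremove assumes no other LVs remain in the group):

$ lvchange -an vg0/unwanted           # deactivate the unwanted logical volume
$ vgchange -an vg0                    # deactivate the whole volume group
$ lvremove vg0/unwanted               # if re-partitioning is requested: remove the LV ...
$ vgremove vg0                        # ... then the volume group ...
$ pvremove /dev/mapper/mpatha-part1   # ... and finally the LVM label on the physical volume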

tags: added: targetmilestone-inin16042
removed: targetmilestone-inin16041
Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2016-11-09 07:19 EDT-------
@Xnox, does the answer provide enough information to proceed here? Many thanks in advance.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

I believe I have managed to reproduce this issue now.
The logical volumes are detected correctly and are present in e.g. the "Configure LVM" sub-menu, but they do not appear in the manual partitioning screen.

I did: "rm /var/lib/partman/lvm" in the d-i manual shell, and executed detect disks again and they have appeared in the manual partitioning step. So clearly, LVM detection does not happen again after multipath devices have been assembled. Investigating further now.

Changed in debian-installer (Ubuntu):
status: Incomplete → Confirmed
importance: Undecided → Medium
Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
status: Incomplete → Confirmed
Revision history for this message
bugproxy (bugproxy) wrote : syslog partman and screenshot

Default Comment by Bridge

Changed in debian-installer (Ubuntu):
assignee: Skipper Bug Screeners (skipper-screen-team) → Dimitri John Ledkov (xnox)
milestone: none → ubuntu-17.07
importance: Medium → High
Frank Heimes (fheimes)
Changed in ubuntu-z-systems:
importance: Medium → High
tags: added: id-5a7c437391f01a5d57c688e6
Revision history for this message
bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2018-05-24 03:14 EDT-------
@Canonical, please provide an update for this LP. Many thx...

Revision history for this message
Frank Heimes (fheimes) wrote :

I just tried this again on 16.04.4 and it seems to work for me:

I did an FCP (single LUN) LPAR installation of 16.04.4 and created two test files after the installation:
$ sudo vi /test1.txt
$ vi /home/ubuntu/test2.txt
Then I re-ran the installation, enabled the FCP devices, selected manual partitioning, selected the still-existing ext4 root disk (#1), and just (re-)configured it with 'Use as: Ext4' (without doing any partitioning), 'Mount point: /', and "no, keep existing data" (not recommended, just for testing here). I confirmed this (including not formatting that partition), proceeded with the installation, completed it, and rebooted.
After logging in to the system again I could still find the two files, hence the old partition was really reused.
For details see the attached file test1.txt.

Revision history for this message
Frank Heimes (fheimes) wrote :

Now I did another test and installed the same LPAR again, this time using a bunch of DASD disks for the root fs, while adding the same existing FCP LUN, now mounted at /space. I proceeded with the installation, completed it, and rebooted, and I can still find the two initially created test files (and of course the entire old root file system, now located at /space).
For details see the attached file test2.txt.

Revision history for this message
Frank Heimes (fheimes) wrote :

... hence I'm marking this ticket as fixed for now.

Changed in debian-installer (Ubuntu):
status: Confirmed → Fix Released
Changed in ubuntu-z-systems:
status: Confirmed → Fix Released
Revision history for this message
bugproxy (bugproxy) wrote :

------- Comment From <email address hidden> 2018-06-04 03:43 EDT-------
IBM Bugzilla status -> closed; Verified by Canonical and Fix Released
