during shutdown libvirt-guests gets stopped after file system unmount
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
lvm2 | New | Unknown | |
libvirt (Ubuntu) | Incomplete | Undecided | Unassigned |
lvm2 (Fedora) | In Progress | High | |
lvm2 (Ubuntu) | New | Undecided | Unassigned |
Bug Description
When using automatic guest suspend at reboot/shutdown, it makes sense to store the suspend data on a separate partition to ensure there is always enough space available. However, this does not work, because the partition gets unmounted before or during the libvirt suspend.
Steps to reproduce:
1. Use Ubuntu 18.04.02 LTS
2. Install libvirt + qemu-kvm
3. Start a guest
4. Set libvirt-guests to suspend at shutdown/reboot by editing /etc/default/
5. Create a fstab entry to mount a separate partition to mount point /var/lib/
6. Reboot
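The configuration in steps 4 and 5 can be sketched as follows. The paths in the report are truncated; the file names and values below are assumptions based on the stock Ubuntu packaging (`/etc/default/libvirt-guests` and the default managed-save directory `/var/lib/libvirt/qemu/save`):

```ini
# /etc/default/libvirt-guests (assumed path) -- step 4:
# suspend (managed-save) running guests on host shutdown
ON_SHUTDOWN=suspend
#SHUTDOWN_TIMEOUT=300

# /etc/fstab (assumed entry) -- step 5: dedicated partition for the
# suspend images; device and filesystem type are placeholders
/dev/vdb1  /var/lib/libvirt/qemu/save  ext4  defaults  0  2
```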
Expected result:
The guest suspend data would be written to the /var/lib/
Actual result:
The partition gets unmounted before libvirt-guests suspends the guests, resulting in the data being stored on the partition containing the root file system. During boot, the empty partition gets mounted over the non-empty /var/lib/
As a side effect, the saved data is using up space on the root partition even if the directory appears empty.
Here are some of the relevant lines from the journal:
Jun 14 00:00:04 libvirt-host blkdeactivate[
Jun 14 00:00:04 libvirt-host systemd[1]: Unmounted /var/lib/
Jun 14 00:00:04 libvirt-host blkdeactivate[
Jun 14 00:00:04 libvirt-host libvirt-
Jun 14 00:00:04 libvirt-host blkdeactivate[
Jun 14 00:00:05 libvirt-host libvirt-
Jun 14 00:00:05 libvirt-host libvirt-
Jun 14 00:00:05 libvirt-host blkdeactivate[
Jun 14 00:00:10 libvirt-host libvirt-
Jun 14 00:00:15 libvirt-host libvirt-
Jun 14 00:00:20 libvirt-host libvirt-
tags: added: server-triage-discuss
tags: removed: server-triage-discuss
Changed in lvm2:
  status: Unknown → New
Changed in lvm2 (Fedora):
  importance: Unknown → High
  status: Unknown → Confirmed
Changed in lvm2 (Fedora):
  status: Confirmed → In Progress
Description of problem:
The blk-availability.service unit is activated automatically when multipathd is enabled, even if multipathd ends up not being used.
This causes the blk-availability service to unmount file systems too early, breaking unit ordering and leading to shutdown issues for custom services that require certain mount points.
Version-Release number of selected component (if applicable):
device-mapper-1.02.149-10.el7_6.3.x86_64
How reproducible:
Always
Steps to Reproduce:
1. Enable multipathd even though there is no multipath device
# yum -y install device-mapper-multipath
# systemctl enable multipathd --now
2. Create a custom mount point "/data"
# lvcreate -n data -L 1G rhel
# mkfs.xfs /dev/rhel/data
# mkdir /data
# echo "/dev/mapper/rhel-data /data xfs defaults 0 0" >> /etc/fstab
# mount /data
3. Create a custom service requiring mount point "/data"
# cat > /etc/systemd/system/my.service << EOF
[Unit]
RequiresMountsFor=/data
[Service]
ExecStart=/bin/bash -c 'echo "STARTING"; mountpoint /data; true'
ExecStop=/bin/bash -c 'echo "STOPPING IN 5 SECONDS"; sleep 5; mountpoint /data; true'
Type=oneshot
RemainAfterExit=true
[Install]
WantedBy=default.target
EOF
# systemctl daemon-reload
# systemctl enable my.service --now
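For context, `RequiresMountsFor=/data` makes systemd add implicit `Requires=` and `After=` dependencies on `data.mount` (see systemd.unit(5)), so the stop ordering at shutdown should be the reverse of the start ordering:

```ini
# What RequiresMountsFor=/data implies for my.service:
Requires=data.mount
After=data.mount
# Start order:  data.mount -> my.service
# Stop order:   my.service -> data.mount
# i.e. /data should stay mounted until my.service has finished stopping.
```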
4. Set up persistent journal and reboot
# mkdir -p /var/log/journal
# systemctl restart systemd-journald
# reboot
5. Check the previous boot's shutdown
# journalctl -b -1 -o short-precise -u my.service -u data.mount -u blk-availability.service
Actual results:
-- Logs begin at Thu 2019-04-18 12:48:12 CEST, end at Thu 2019-04-18 13:35:50 CEST. --
Apr 18 13:31:46.933571 vm-blkavail7 systemd[1]: Started Availability of block devices.
Apr 18 13:31:48.452326 vm-blkavail7 systemd[1]: Mounting /data...
Apr 18 13:31:48.509633 vm-blkavail7 systemd[1]: Mounted /data.
Apr 18 13:31:48.856228 vm-blkavail7 systemd[1]: Starting my.service...
Apr 18 13:31:48.894419 vm-blkavail7 bash[2856]: STARTING
Apr 18 13:31:48.930270 vm-blkavail7 bash[2856]: /data is a mountpoint
Apr 18 13:31:48.979457 vm-blkavail7 systemd[1]: Started my.service.
Apr 18 13:35:02.544999 vm-blkavail7 systemd[1]: Stopping my.service...
Apr 18 13:35:02.547811 vm-blkavail7 systemd[1]: Stopping Availability of block devices...
Apr 18 13:35:02.639325 vm-blkavail7 bash[3393]: STOPPING IN 5 SECONDS
Apr 18 13:35:02.760043 vm-blkavail7 blkdeactivate[3395]: Deactivating block devices:
Apr 18 13:35:02.827170 vm-blkavail7 blkdeactivate[3395]:   [SKIP]: unmount of rhel-swap (dm-1) mounted on [SWAP]
Apr 18 13:35:02.903924 vm-blkavail7 systemd[1]: Unmounted /data.
Apr 18 13:35:02.988073 vm-blkavail7 blkdeactivate[3395]:   [UMOUNT]: unmounting rhel-data (dm-2) mounted on /data... done
Apr 18 13:35:02.988253 vm-blkavail7 blkdeactivate[3395]:   [SKIP]: unmount of rhel-root (dm-0) mounted on /
Apr 18 13:35:03.083448 vm-blkavail7 systemd[1]: Stopped Availability of block devices.
Apr 18 13:35:07.693154 vm-blkavail7 bash[3393]: /data is not a mountpoint
Apr 18 13:35:07.696330 vm-blkavail7 systemd[1]: Stopped my.service.
--> We can see the following:
- blkdeactivate runs, unmounting /data, even though my.service is still running (hence the unexpected ...
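One possible way to express the missing ordering, purely as a sketch (not something proposed in the report; unit names taken from the reproducer above): a drop-in that orders my.service after blk-availability.service, so that at shutdown blkdeactivate only runs once my.service has fully stopped:

```ini
# /etc/systemd/system/my.service.d/ordering.conf (hypothetical)
[Unit]
# systemd stops units in the reverse of their start order, so starting
# my.service after blk-availability.service means blk-availability
# (whose ExecStop runs blkdeactivate) waits for my.service to stop first.
After=blk-availability.service
```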