Comment 14 for bug 1832859

In , rmetrich (rmetrich-redhat-bugs) wrote:

Description of problem:

The blk-availability.service unit is activated automatically when multipathd is enabled, even if multipathd ends up not being used.
This causes blk-availability.service to unmount file systems too early, breaking unit ordering and causing shutdown failures for custom services that require certain mount points.

Version-Release number of selected component (if applicable):

device-mapper-1.02.149-10.el7_6.3.x86_64

How reproducible:

Always

Steps to Reproduce:

1. Enable multipathd even though there is no multipath device

  # yum -y install device-mapper-multipath
  # systemctl enable multipathd --now
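
To confirm the precondition that no multipath device actually exists, the multipath topology can be inspected first (a sanity check I would add, not part of the original report):

  # Show the current multipath topology; on a host with no multipath
  # devices this prints nothing.
  # multipath -ll

  # Confirm the daemon is nevertheless active.
  # systemctl is-active multipathd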

2. Create a custom mount point "/data"

  # lvcreate -n data -L 1G rhel
  # mkfs.xfs /dev/rhel/data
  # mkdir /data
  # echo "/dev/mapper/rhel-data /data xfs defaults 0 0" >> /etc/fstab
  # mount /data
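
The new mount can be verified before rebooting (assuming the volume group is named "rhel" as in the commands above):

  # Verify that /data is mounted from the expected device-mapper node.
  # findmnt /data

  # Show the logical volume backing the mount point.
  # lsblk /dev/mapper/rhel-data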

3. Create a custom service requiring mount point "/data"

  # cat > /etc/systemd/system/my.service << EOF
[Unit]
RequiresMountsFor=/data

[Service]
ExecStart=/bin/bash -c 'echo "STARTING"; mountpoint /data; true'
ExecStop=/bin/bash -c 'echo "STOPPING IN 5 SECONDS"; sleep 5; mountpoint /data; true'
Type=oneshot
RemainAfterExit=true

[Install]
WantedBy=default.target
EOF
  # systemctl daemon-reload
  # systemctl enable my.service --now
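
Note that RequiresMountsFor=/data implicitly adds Requires= and After= dependencies on data.mount, which systemd reverses at shutdown: my.service should be stopped before data.mount is unmounted. The generated dependencies can be checked with:

  # data.mount should appear in both properties.
  # systemctl show my.service -p Requires,After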

4. Set up persistent journal and reboot

  # mkdir -p /var/log/journal
  # systemctl restart systemd-journald
  # reboot

5. Check the previous boot's shutdown

  # journalctl -b -1 -o short-precise -u my.service -u data.mount -u blk-availability.service

Actual results:

-- Logs begin at Thu 2019-04-18 12:48:12 CEST, end at Thu 2019-04-18 13:35:50 CEST. --
Apr 18 13:31:46.933571 vm-blkavail7 systemd[1]: Started Availability of block devices.
Apr 18 13:31:48.452326 vm-blkavail7 systemd[1]: Mounting /data...
Apr 18 13:31:48.509633 vm-blkavail7 systemd[1]: Mounted /data.
Apr 18 13:31:48.856228 vm-blkavail7 systemd[1]: Starting my.service...
Apr 18 13:31:48.894419 vm-blkavail7 bash[2856]: STARTING
Apr 18 13:31:48.930270 vm-blkavail7 bash[2856]: /data is a mountpoint
Apr 18 13:31:48.979457 vm-blkavail7 systemd[1]: Started my.service.
Apr 18 13:35:02.544999 vm-blkavail7 systemd[1]: Stopping my.service...
Apr 18 13:35:02.547811 vm-blkavail7 systemd[1]: Stopping Availability of block devices...
Apr 18 13:35:02.639325 vm-blkavail7 bash[3393]: STOPPING IN 5 SECONDS
Apr 18 13:35:02.760043 vm-blkavail7 blkdeactivate[3395]: Deactivating block devices:
Apr 18 13:35:02.827170 vm-blkavail7 blkdeactivate[3395]: [SKIP]: unmount of rhel-swap (dm-1) mounted on [SWAP]
Apr 18 13:35:02.903924 vm-blkavail7 systemd[1]: Unmounted /data.
Apr 18 13:35:02.988073 vm-blkavail7 blkdeactivate[3395]: [UMOUNT]: unmounting rhel-data (dm-2) mounted on /data... done
Apr 18 13:35:02.988253 vm-blkavail7 blkdeactivate[3395]: [SKIP]: unmount of rhel-root (dm-0) mounted on /
Apr 18 13:35:03.083448 vm-blkavail7 systemd[1]: Stopped Availability of block devices.
Apr 18 13:35:07.693154 vm-blkavail7 bash[3393]: /data is not a mountpoint
Apr 18 13:35:07.696330 vm-blkavail7 systemd[1]: Stopped my.service.

--> We can see the following:
- blkdeactivate runs and unmounts /data while my.service is still stopping (hence the unexpected message "/data is not a mountpoint"), violating the expected unit ordering

Expected results:

- my.service gets stopped
- then "data.mount" gets stopped
- finally blkdeactivate runs

Additional info:

I understand there is a chicken-and-egg problem here, but blk-availability simply cannot blindly unmount file systems and ignore the expected unit ordering.
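
As a possible mitigation until this is fixed properly (my suggestion, not a confirmed workaround): since systemd reverses After= ordering on the stop path, ordering my.service after blk-availability.service should force systemd to finish stopping my.service before blkdeactivate runs. A hypothetical drop-in (path and file name are my own choice):

  # cat > /etc/systemd/system/my.service.d/order.conf << EOF
[Unit]
# After= at start time means "stopped before" at shutdown, so my.service
# completes its ExecStop before blk-availability.service (and thus
# blkdeactivate) is stopped.
After=blk-availability.service
EOF
  # systemctl daemon-reload

This only shields my.service; data.mount itself may still be unmounted by blkdeactivate rather than by systemd.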