Comment 6 for bug 1571761

Richard Laager (rlaager) wrote :

[quoted blocks trimmed and reordered for ease of reply]

> I'm curious, how are these actually being used? They can't be relevant
> for the root file system as that already needs to be set up in the
> initrd.

Correct. The root pool (with all of its filesystems) is handled in the initrd.
These units are for regular (non-root) pools.

> This unit does not sort itself before local-fs-pre.target,
> i. e. you also can't rely on having these done when stuff in
> /etc/fstab is being mounted. The only thing that I see is that
> zfs-mount.service.in is Before=local-fs.target, i. e the zfs-import-*
> bits will run in parallel with mounting fstab.

In parallel with mounting fstab sounds reasonable, since mounting filesystems
from fstab and mounting filesystems from ZFS (which happens by default upon
import) are analogous.

> Does this mean that ZFS isn't currently supported on hotpluggable
> storage? (As there's nothing that would call zpool import on them)

In short, yes. If (after boot) you plug in a USB drive with a zpool on it, you
would need to manually import it. Personally, I'd like to see this improved in
the future.
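For reference, the manual step amounts to running `zpool import` (with no pool name) to scan for importable pools, then `zpool import <pool>` to actually import one, optionally with `-d <dir>` to restrict the device scan. The tiny helper below is hypothetical (not part of ZFS); it only assembles the command line a user would type, it does not run zpool:

```python
# Hypothetical helper: build the `zpool import` invocation a user would
# run by hand after hotplugging a drive. It only constructs argv;
# actually importing requires running the command on a system with ZFS.

def zpool_import_cmd(pool=None, search_dir=None):
    cmd = ["zpool", "import"]
    if search_dir:
        cmd += ["-d", search_dir]  # scan only this device directory
    if pool:
        cmd.append(pool)           # name a pool to import; omit to just list
    return cmd
```

So `zpool_import_cmd()` corresponds to the "list what's importable" form, and `zpool_import_cmd("tank", "/dev/disk/by-id")` to importing a specific pool from stable device paths.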

> Is zpool import idempotent in any way,
> i. e. would it be safe (and also performant) to run it whenever a
> block device is seen, instead of once ever in the boot sequence?

I'm not super confident, but I think it would be idempotent, at least assuming
it is not run in parallel. But I doubt it would be performant. Running
zpool import examines all devices (all zfs_member devices if compiled with
libblkid support, assuming I understand the code correctly).

Additionally, if you're responding to device events one-by-one, then for pools
with more than one disk there's a question of whether you should import the
pool while some disks are still missing (assuming enough redundancy remains
that you *can* import the pool at all, of course).
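To make the redundancy question concrete, here is a rough sketch (not the actual ZFS logic; the vdev names, tolerances, and function are illustrative assumptions) of deciding whether enough member disks have shown up to import at all:

```python
# Sketch only: how many missing disks each (hypothetical) vdev layout
# can tolerate while still allowing a degraded import.
FAULT_TOLERANCE = {
    "disk": 0,     # single disk: nothing may be missing
    "mirror2": 1,  # 2-way mirror survives 1 missing disk
    "raidz1": 1,
    "raidz2": 2,
    "raidz3": 3,
}

def importable(vdev_type, total_disks, present_disks):
    """True if enough members are present that the pool *could* import."""
    missing = total_disks - present_disks
    return missing <= FAULT_TOLERANCE[vdev_type]
```

For example, a 4-disk raidz1 with 3 disks present is importable (degraded), but with only 2 present it is not; the open question above is whether an event-driven importer should act in the degraded case or keep waiting.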

> LVM is a bit more complicated,
> its udev rules mostly just tell a daemon (lvmetad) about new devices
> which then decides when it has enough pieces (PVs) to bring up a VG.

This might be the right approach for ZFS long-term. I think the best behavior
is to import a pool:
  once all devices are present ||
  (once sufficient devices are present &&
   a short timeout has expired with no new disks arriving)
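The policy above could be sketched as a small pure function; everything here (names, the grace period, the "sufficient" flag meaning "importable within redundancy limits") is an illustrative assumption, not existing ZFS or systemd behavior. A device-event daemon would call it after each disk arrival and again when a timer fires:

```python
# Sketch of the proposed import policy: import immediately once every
# member device is present, or import degraded once sufficient devices
# are present AND no new disk has arrived for a settling period.

GRACE_SECONDS = 5.0  # arbitrary timeout for late-arriving disks

def should_import(present, total, sufficient, last_arrival, now):
    if present == total:           # all member devices are here
        return True
    quiet = (now - last_arrival) >= GRACE_SECONDS
    return sufficient and quiet    # degraded import only after the timeout
```

This mirrors the lvmetad-style approach: per-device events accumulate state, and the decision to bring the pool up is made from that state rather than by re-scanning on every event.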