please have lxd recommend zfs

Bug #1624540 reported by Dustin Kirkland  on 2016-09-16
This bug affects 2 people
Affects:
  lxd (Ubuntu)
  zfs-linux (Ubuntu), assigned to Colin Ian King

Bug Description

Since ZFS is now in Main (Bug #1532198), LXD should recommend the ZFS userspace package, such that 'sudo lxd init' just works.

Changed in lxd (Ubuntu):
status: New → Triaged
importance: Undecided → Medium
assignee: nobody → Stéphane Graber (stgraber)
importance: Medium → High
Stéphane Graber (stgraber) wrote :

Marking as incomplete since we can't recommend zfsutils-linux so long as it depends on python2.7.

Changed in lxd (Ubuntu):
status: Triaged → Incomplete
assignee: Stéphane Graber (stgraber) → nobody

Gack. Is there a newer zfs-utils that is python3?


Stéphane Graber (stgraber) wrote :

So I've confirmed that the python3 side of this has been resolved. zfsutils-linux, as it is in yakkety right now, has been switched over to python3 by Colin.

Though, as discussed by e-mail when Dustin first brought this up, there is a second issue I think we should resolve before we start including the zfs tools by default on all Ubuntu 64-bit server installations.

That's the fact that zfs brings with it a daemon (zfs-zed) which will start on all systems, regardless of their use of zfs. I'm not against us running zed as it does seem to provide some value in background monitoring of zfs pools, but I don't think we want all Ubuntu servers to run that daemon when they're not using zfs themselves.

In containers, zed fails to start, but systemd attempts to respawn it for about 10s, leading to a whole bunch of log messages, a unit startup failure and quite a waste of CPU time.

I believe the zed systemd unit should at the very least be modified not to start inside containers and should then be further modified to only start on systems which have at least one zpool defined.

Subscribing Colin to this bug.

Richard Laager (rlaager) wrote :

It is important that zed run in all cases where a pool (with real disks) exists. This will only get more important over time, as the fault management code is improved. (Intel is actively working on this.)

It seems reasonable to not start zed in a container, though.

For the second piece, only running zed if a pool exists... Did you have a specific implementation in mind?

Long-term, there are some questions about whether zed should be in charge of importing pools. If zed were to be the thing that imports pools, that creates a problem if you don't want zed to run all the time. I don't think I came up with the idea of zed orchestrating pool imports, but I discussed it here:

There's also this work-in-progress pull request upstream that may be a better approach:

Long-term, if upstream were to accept something like that pull request, and make its zpool import unit template Wants=zed.service, would that accomplish all the goals?

I'm not saying we have to implement the whole thing now. I just want to make sure the fixes now are compatible with a long-term plan. I also want to make sure that upstream's long-term plans are compatible with Ubuntu's needs.

Changed in zfs-linux (Ubuntu):
status: New → Confirmed
Mark Shuttleworth (sabdfl) wrote :

I think it would be acceptable, for now, if (a) the zed init script
avoided starting in containers, and (b) there were an /etc/default/zed
which enabled one to disable zed altogether. Would that cause a problem
for people creating their first pool, where their docs expect zed to be running?


Colin Ian King (colin-king) wrote :

It would be unfortunate not to have zed running once people start creating pools, precisely because zed provides quality feedback on faults. As rlaager pointed out, the fault-management intelligence in zed will only improve over time, so catching issues early with zed is part of the story that makes ZFS a superior storage solution.

Mark Shuttleworth (sabdfl) wrote :

Right, the only issue is how to address the legitimate concerns of
people not using ZFS. Ideally, we'd be:

* detecting the need for it on boot and starting it if relevant
* also starting it whenever a zpool is created

This way, it's there whenever you need it, otherwise not. I'm pretty
sure the first part is easy, not sure about the latter. If systemd
generates events on module loading, that might do the trick.


Richard Laager (rlaager) wrote :

The ZFS module is loaded automatically by zpool-import-scan/zpool-import-cache.

Stéphane Graber (stgraber) wrote :

Right, so unfortunately we can't base it on whether the zfs module is loaded, as it will effectively always be loaded as soon as we pre-install zfsutils-linux in our images.

Now what we could do I guess is:
 - Don't start ANY of the 3 zfs systemd units in containers (that should be pretty trivial)
 - Have zfs-zed start after zfs-import-scan and zfs-mount, and have it bail if there are no zfs filesystems mounted

That still leaves us without zed running after zpool creation. The only way I can think of to fix this would be to alter the zpool command itself. Though I'm not sure how big a deal it would really be, given that zed would then start after the next reboot, and the vast majority of users reboot their system after changing their storage configuration anyway, just to make sure that things get mounted properly.

So the window where zed wouldn't be running would be relatively small.

In an ideal world, we'd have something like RequiresMountsFor in systemd but taking a filesystem type rather than a path, so we could have a unit list "RequiresFilesystem" with "zfs" and so have zfs-zed kick in as soon as any zfs mount occurs, but I'm not seeing any way to do this with current systemd. (subscribing martin for ideas)
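The "bail if there are no zfs filesystems mounted" idea could be sketched as a small startup guard (a hedged sketch only; the function name and the optional mounts-file parameter are assumptions for illustration, the real source being /proc/self/mounts):

```shell
# Hypothetical guard for zfs-zed: succeed only when at least one
# filesystem of type zfs is currently mounted. The optional argument
# (used for testing) overrides the mounts file to read.
has_zfs_mounts() {
    grep -q ' zfs ' "${1:-/proc/self/mounts}"
}
```

Wired in as a start condition, this would let the unit exit cleanly instead of respawning and failing on systems with no ZFS in use.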

Martin Pitt (pitti) wrote :

> I believe the zed systemd unit should at the very least be modified not to start inside containers

That can be done with ConditionVirtualization=!container
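A minimal drop-in sketch of that condition (the drop-in path is an assumption; the same line could equally be patched into the shipped unit):

```ini
# /etc/systemd/system/zed.service.d/no-container.conf (hypothetical path)
[Unit]
# Never start zed inside a container, where it cannot do anything useful
# and would otherwise fail and be respawned repeatedly.
ConditionVirtualization=!container
```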

> (b) there were an /etc/default/zed which enabled one to disable zed altogether.

Please don't do that. /etc/default files should never have been mis-used for enabling/disabling services -- there already is update-rc.d enable/disable for that (or the more direct systemctl enable/disable now).

> we could have a unit list "RequiresFilesystem" with "zfs" and so have zfs-zed kick in as soon as any zfs mount occurs

There is no direct way to express this; but IMHO a sufficient approximation is to start it if a zfs file system exists. That can be done with a udev rule like

  ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs", ENV{SYSTEMD_WANTS}+="zed.service"

Note that I don't know the precise value of ID_FS_TYPE for zpools, and the precise name of the service you want to start.

Stéphane Graber (stgraber) wrote :

So the problem is that zfs does its own loop mount handling without using a loop device, so there is no block uevent...

Triggering on a uevent would only work for pools that are using physical devices, not for those backed by loop files, which is unfortunately a rather common setup for LXD users who didn't plan ahead for ZFS.

Richard Laager (rlaager) wrote :

@pitti, the ID_FS_TYPE is zfs_member, not zfs. The service is, as you listed, zed.service. The modified rule is then:

  ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member", ENV{SYSTEMD_WANTS}+="zed.service"

However, if zed.service is going to exit if there is no pool imported, this udev rule may not help. The udev event is going to fire before the pool is imported.

@stgraber, running zed is less important for loopback devices. So between that and catching it on reboot, we might be "close enough".

Martin Pitt (pitti) wrote :

> The udev event is going to fire before the pool is imported.

So how does a pool get imported, what triggers that if it's not block devices appearing? Whatever does that import, couldn't that start zed.service then instead of the udev rule?

Richard Laager (rlaager) wrote :

The pools are imported by either zfs-import-scan.service or zfs-import-cache.service. (Which service runs depends on whether /etc/zfs/zpool.cache exists.) They both call `zpool import -a` plus some other arguments. In other words, `zpool import -a` is being run unconditionally, whether pools exist or not.

It would be possible to throw a wrapper script around `zpool import` and run that from those services instead. It could then start zed.service if and only if a pool exists. I'm not sure if this is the best solution, but at first glance, it seems like it would work.
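As a sketch of that wrapper idea (all names here are assumptions for illustration, not the actual packaging change): run the import as the services already do, then start zed only if at least one pool ended up imported:

```shell
# Hypothetical post-import hook: `zpool list -H` prints one line per
# imported pool and nothing at all when none exist, so a non-empty
# result means there is a pool for zed to watch.
start_zed_if_pools() {
    if [ -n "$(zpool list -H 2>/dev/null)" ]; then
        systemctl start zed.service
    fi
}
```

The same check run from both import services would also cover the `zpool create` case after the next reboot, per the discussion above.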

Aron Xu (happyaron) wrote :

As a note here, previous versions of zfs-linux in Ubuntu unconditionally loaded the zfs kernel modules for everyone who had the package installed; since zfs-linux/ (synced from Debian), they are only loaded when the system has at least one zpool configured (ConditionPathExists=/etc/zfs/zpool.cache), as expected by rlaager in comment #14.

Colin Ian King (colin-king) wrote :

This was fixed on the following version:

zfs-linux ( zesty; urgency=medium

  * Resynchronize with Debian, remaining changes:
    - Load zfs module unconditionally for zesty

 -- Aron Xu <email address hidden> Mon, 20 Mar 2017 11:24:41 +0800

Changed in zfs-linux (Ubuntu):
status: Confirmed → Fix Released
importance: Undecided → High
Stéphane Graber (stgraber) wrote :

Colin: This is not what this issue is about.

This issue is about getting the ZFS tools installed by default in server images, with the problem that doing so now would result in zfs-zed running all the time for everyone, regardless of whether they use ZFS or not.

What we want is:
 - Don't load the module by default (as it taints the kernel)
 - Use of ZFS tools should automatically load the kernel module
 - zfs-zed should only start when one or more zpools have been imported

With those done, we'll be able to ship the zfs tools in all Ubuntu installs without negatively affecting users.
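The second item, loading the module only when the tools are actually used, could be approximated with a small guard run before any zpool/zfs invocation (a sketch under stated assumptions: the helper name and the optional directory parameter are illustration-only, the real indicator of a loaded module being /sys/module/zfs):

```shell
# Hypothetical on-demand loader: call modprobe only when the zfs module
# is not already loaded. The optional argument (used for testing)
# overrides the sysfs directory that indicates a loaded module.
ensure_zfs_module() {
    [ -d "${1:-/sys/module/zfs}" ] || modprobe zfs
}
```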

Changed in zfs-linux (Ubuntu):
status: Fix Released → Triaged
Changed in zfs-linux (Ubuntu):
status: Triaged → In Progress
assignee: nobody → Colin Ian King (colin-king)
Stéphane Graber (stgraber) wrote :

Marking the LXD side of this fixed as we're now shipping as a snap by default and the snap contains zfs.

Changed in lxd (Ubuntu):
status: Incomplete → Fix Released