refreshing a pool destroys all volume info in libvirt zfs backend

Bug #1767997 reported by Brian Candler
This bug affects 1 person
Affects           Status      Importance  Assigned to  Milestone
libvirt (Ubuntu)  Confirmed   Low         Unassigned

Bug Description

How to replicate:

1. Install libvirt, libvirt-daemon-driver-storage-zfs and virt-manager

2. Create a zfs pool

# zpool create -oashift=12 zfs /dev/sdb
# zfs set compression=lz4 zfs
# zfs create zfs/images
# virsh pool-define-as --name zfs --source-name zfs/images --type zfs

3. Create a volume in that pool

root@beaver:~# virsh vol-create-as --pool zfs --name vol1 --capacity 1G
Vol vol1 created

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------
 vol1 /dev/zvol/zfs/images/vol1

4. Start virt-manager (even as a non-root user)

5. Immediately re-list the volumes from virsh:

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------

root@beaver:~#

All zfs volumes have been forgotten!! Fortunately they still exist in zfs:

root@beaver:~# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
zfs              1.03G  8.59G    96K  /zfs
zfs/images       1.03G  8.59G    96K  /zfs/images
zfs/images/vol1  1.03G  9.63G    56K  -

I think virt-manager is supposed to play nicely with zfs:

* You can create a zfs pool successfully in the GUI
* You can click to create a zfs volume. It actually creates the underlying zvol (as shown by "zfs list")
* But every time you do any action in virt-manager, even just clicking the Refresh button on the list of volumes, all volumes disappear from libvirt.

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------
 vol1 /dev/zvol/zfs/images/vol1

root@beaver:~#

<<< click the Refresh button >>>

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------

root@beaver:~#

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: virt-manager 1:1.5.1-0ubuntu1
ProcVersionSignature: Ubuntu 4.15.0-20.21-generic 4.15.17
Uname: Linux 4.15.0-20-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.9-0ubuntu7
Architecture: amd64
Date: Mon Apr 30 10:24:19 2018
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: virt-manager
UpgradeStatus: No upgrade log present (probably fresh install)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi (again) Brian,
this is rather unexpected; let me try to follow what happened.
So far you have only triggered this via a virt-manager start/refresh, so I'll follow that until we know a non-virt-manager trigger - virt-manager usually only calls libvirt tasks, so I'd assume we can identify a virsh <something> that triggers the same.

Fortunately it seems libvirt "only" forgets about the volumes, quoting you on "All zfs volumes have been forgotten!! Fortunately they still exist in zfs".

I should be able to start testing this in an hour or so I hope ...

Changed in virt-manager (Ubuntu):
importance: Undecided → High
Revision history for this message
Brian Candler (b-candler) wrote :

Aha, good thinking about reproducing with virsh. To find out what virt-manager is doing, I did:

LIBVIRT_DEBUG=1 virt-manager --no-fork

Clicking the refresh button shows me it's doing "virStoragePoolRefresh", which takes me to
ftp://libvirt.org/libvirt/virshcmdref/html/sect-pool-refresh.html

And yes, I can reproduce using just virsh pool-refresh:

root@beaver:~# virsh vol-create-as --pool zfs --name vol1 --capacity 1G
Vol vol1 created

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------
 vol1 /dev/zvol/zfs/images/vol1

root@beaver:~# virsh pool-refresh zfs
Pool zfs refreshed

root@beaver:~# virsh vol-list --pool zfs
 Name Path
------------------------------------------------------------------------------

root@beaver:~#

Revision history for this message
Brian Candler (b-candler) wrote :

And using strace on libvirtd, I see it's running this:

[pid 5806] execve("/sbin/zpool", ["/sbin/zpool", "get", "-Hp", "health,size,free,allocated", "zfs/images"], 0x7fffedc8b6c8 /* 6 vars */) = 0

Which is not a valid command:

root@beaver:~# /sbin/zpool get -Hp health,size,free,allocated zfs/images
cannot open 'zfs/images': invalid character '/' in pool name

Right. It seems that this part of libvirt thinks that it has full rein over a zfs *pool*, whereas everything else is happy to create child datasets of an existing *dataset*.

Furthermore: when the zpool command fails, libvirt is still happy to zap all the existing defined libvirt volumes (or it has already done so).
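
For reference, the pool I defined looks roughly like this in libvirt (virsh pool-dumpxml zfs, with uuid/capacity elements trimmed) - the slash that zpool chokes on comes straight from the <source><name> dataset:

<pool type='zfs'>
  <name>zfs</name>
  <source>
    <name>zfs/images</name>
  </source>
</pool>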

Revision history for this message
Brian Candler (b-candler) wrote :

I looked in both the 4.0.0 and HEAD sources, and the call to zpool uses def->source.name. I think it would work fine if it stripped off everything from the first slash onwards.

(Aside: if the zpool command fails, virStorageBackendZFSRefreshPool() just goes straight to cleanup and returns zero, as if nothing bad had happened)
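
To make the aside concrete, the refresh path is shaped roughly like the sketch below (my paraphrase, not the verbatim libvirt source - helper and accessor names vary between versions):

static int
virStorageBackendZFSRefreshPool(virStoragePoolObjPtr pool)
{
    virStoragePoolDefPtr def = virStoragePoolObjGetDef(pool); /* or pool->def in older trees */
    virCommandPtr cmd = NULL;
    char *zpool_props = NULL;

    cmd = virCommandNewArgList(ZPOOL,
                               "get", "-Hp",
                               "health,size,free,allocated",
                               def->source.name,   /* "zfs/images" - zpool rejects the '/' */
                               NULL);
    virCommandSetOutputBuffer(cmd, &zpool_props);
    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;                /* error swallowed here ... */

    /* ... parse the zpool properties, then list the zvols via `zfs list`
     * and rebuild the volume list ... */

 cleanup:
    virCommandFree(cmd);
    VIR_FREE(zpool_props);
    return 0;                        /* ... and success reported regardless */
}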

It then goes on to call zfs list, which would do the following:

root@beaver:~# zfs list -Hp -t volume -r -o name,volsize,refreservation zfs/images
zfs/images/vol1 1073741824 1109393408

That looks perfectly OK to me.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Repro without disks or other HW dependency:
 $ apt install zfsutils-linux
 $ fallocate -l 100M /tmp/zfs
 $ sudo zpool create zfs /tmp/zfs
 $ sudo zfs create -V 10M zfs/vol1
You can use that as a disk just fine:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/zvol/zfsmirrortest/vol1'/>
      <target dev='vdc' bus='virtio'/>
    </disk>

But using disks "through" libvirt pools has a known issue - see bug 1677398.
Implementing a fix there is rather complex and the benefit is low, so those who used it so far added custom apparmor rules to allow access as needed.
As outlined there (until it is implemented), you usually have the option to either use zfs without the pool feature or to allow certain paths in the virt-related apparmor rules on your system (an example rule is sketched below).
TL;DR: zfs *pools* never worked so far unless you manually configured the above.
Therefore I'd not expect zfs pools to work now as-is.
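
Just to illustrate the kind of manual rule meant above (example only - the exact file, typically the libvirt-qemu apparmor abstraction or a per-guest profile, and the device path depend on your setup):

  # allow guests access to zvols below the zfs/images dataset
  /dev/zvol/zfs/images/* rwk,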

I'm still trying to get to the "volumes seen but then lost" issue that you described.
 $ fallocate -l 100M /tmp/Xzfs
 $ sudo zpool create Xzfs /tmp/Xzfs
 $ sudo zfs create Xzfs/images
 $ virsh pool-define-as --name Xzfs --source-name Xzfs/images --type zfs
I ended up with the pool not being started (which is expected after pool-define-as), but then:
 $ virsh pool-start Xzfs
 $ virsh vol-create-as --pool Xzfs --name vol1 --capacity 10M
 $ virsh vol-list --pool Xzfs
  Name Path
  vol1 /dev/zvol/Xzfs/images/vol1

From here I:
1. installed virt-manager - still ok
2. started virt-manager - which broke it

So far I can at least "confirm" your issue.
I found no apparmor issue (the known issue only comes later when the guests start, not from libvirt).
But I found in the libvirt log this:
 error : virCommandWait:2601 : internal error: Child process (/sbin/zpool get -Hp health,size,free,allocated Xzfs/images) unexpected exit status 1: cannot open 'Xzfs/images': invalid character '/' in pool name

I assume this is what virt-manager triggers in libvirt every time, but I don't know the entry point yet. I need to search for it once I'm back (I'm out for a day tomorrow).

I know you had some of your setup before the upgrade - had you tweaked it to work with zfs pools before? If that was the case I'd be much more concerned.

In general (even though I think pools won't work) I'd not want virt-manager to clear the pool view; in fact I'd expect libvirt to keep working and just the start of qemu to fail (due to the apparmor denial).

Changed in virt-manager (Ubuntu):
status: New → Confirmed
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I hadn't read all your updates that came in while I was writing :-)
I need to re-read everything you posted - thanks for your work on this.

You might answer the question of whether that worked for you before or not while I'm doing so ...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Read - I see we both got to the bad zpool call.
I'm in a hurry and need to drop off for now (sorry), but I will look at the source and check whether there are later fixes post-4.0 - if not I'll go deeper and try to create a test version.

Thanks for finding "virsh pool-refresh zfs" for this case.

affects: virt-manager (Ubuntu) → libvirt (Ubuntu)
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Flagging as a libvirt issue, as we ruled out virt-manager.

Revision history for this message
Brian Candler (b-candler) wrote :

I checked libvirt HEAD and the code's the same there: src/storage/storage_backend_zfs.c

    cmd = virCommandNewArgList(ZPOOL,
                               "get", "-Hp",
                               "health,size,free,allocated",
                               def->source.name,
                               NULL);

I guess this needs to go upstream.
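
For illustration (my sketch, not a tested or proposed patch), stripping the source name at the first slash before that zpool call could look roughly like this, while the later zfs list call keeps using the full def->source.name:

    /* Sketch only: query the underlying zpool ("zfs") even when the
     * libvirt pool source is a dataset ("zfs/images"). */
    char zpool_name[256];                      /* arbitrary size for the sketch */
    const char *slash = strchr(def->source.name, '/');

    if (slash)
        snprintf(zpool_name, sizeof(zpool_name), "%.*s",
                 (int)(slash - def->source.name), def->source.name);
    else
        snprintf(zpool_name, sizeof(zpool_name), "%s", def->source.name);

    cmd = virCommandNewArgList(ZPOOL,
                               "get", "-Hp",
                               "health,size,free,allocated",
                               zpool_name,
                               NULL);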

Revision history for this message
Brian Candler (b-candler) wrote :

> if that worked for you before or not

Did this work in Ubuntu 16.04, you mean? No it didn't; I was able to create the libvirt zfs pool via virsh, and virt-manager would show that the pool existed, but not any volumes within it.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

OK - without apparmor fixes (unlikely soon, for the reasons outlined in the other bug) or you weakening the isolation a bit on your system, it won't work then.

So let's focus in this bug on the clearing of the volume list - which should not happen.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I was collecting a summary to report upstream, but then I wondered: are there other scenarios where the name should not be stripped to the base pool?
Or is there another common use case where the pool is created differently so that it works?

I followed some other guides and ended up with a pool created by libvirt itself rather than an externally created one:

<pool type="zfs">
  <name>myzfspool</name>
  <source>
    <name>zpoolname</name>
    <device path="/tmp/Nzfs1"/>
    <device path="/tmp/Nzfs2"/>
  </source>
</pool>

$ virsh pool-create --build Nzfs.xml

Now I have a pool (with the odd name zpoolname)
$ sudo zpool list; sudo zfs list
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
Xzfs         80M   146K  79,9M         -    1%   0%  1.00x  ONLINE  -
Yzfs         80M   117K  79,9M         -    1%   0%  1.00x  ONLINE  -
zfs          80M   130K  79,9M         -    1%   0%  1.00x  ONLINE  -
zfs2         80M  94,5K  79,9M         -    1%   0%  1.00x  ONLINE  -
zpoolname   160M   646K   159M         -    1%   0%  1.00x  ONLINE  -
NAME               USED  AVAIL  REFER  MOUNTPOINT
Xzfs              12,6M  27,4M    24K  /Xzfs
Xzfs/images       12,5M  27,4M    24K  /Xzfs/images
Xzfs/images/vol1  12,5M  39,9M    12K  -
Yzfs               108K  39,9M    24K  /Yzfs
Yzfs/images         24K  39,9M    24K  /Yzfs/images
zfs               35,3M  4,67M    24K  /zfs
zfs/vol1          12,5M  17,2M    12K  -
zfs/vol2          22,8M  27,4M    12K  -
zfs2                78K  39,9M    24K  /zfs2
zpoolname           84K  79,9M    24K  /zpoolname

From this libvirt can create volumes.

$ virsh vol-create-as --pool myzfspool --name vol1 --capacity 10M
In the zfs tools that gets me:
...
zpoolname/vol1 12,5M 79,9M 12K -

And with that libvirt can refresh just fine:
$ virsh vol-list --pool myzfspool
 Name Path
 vol1 /dev/zvol/zpoolname/vol1
$ virsh pool-refresh myzfspool
Pool myzfspool refreshed
$ virsh vol-list --pool myzfspool
 Name Path
 vol1 /dev/zvol/zpoolname/vol1

So I wonder: is this "just" a conflict between how libvirt expects pools to be set up (as it does itself) vs. a manually set up one?

After I learned the above I tried this:
$ fallocate -l 100M /tmp/Mzfs
$ sudo zpool create Mzfs /tmp/Mzfs
$ virsh pool-define-as --name zfs --source-name Mzfs --type zfs
$ virsh pool-start zfs
$ virsh vol-create-as --pool zfs --name vol1 --capacity 10M
$ virsh vol-list --pool zfs
 Name Path
 vol1 /dev/zvol/Mzfs/vol1
$ virsh pool-refresh zfs
Pool zfs refreshed
$ virsh vol-list --pool zfs
 Name Path
 vol1 /dev/zvol/Mzfs/vol1

This confirms that if you skip the "zfs create zfs/images" step and define the libvirt pool from the zpool directly then all things seem to work.

I'd still appreciate a report upstream, but I think the issue is not as severe anymore.

Changed in libvirt (Ubuntu):
importance: High → Low
Revision history for this message
Brian Candler (b-candler) wrote :

> So I wonder: is this "just" a conflict between how libvirt expects pools to be set up (as it does itself) vs. a manually set up one?

(1) If libvirt is only supposed to work with a top-level pool, then it should have refused to allow me to create a libvirt pool with a slash in the zfs pool name.

But that's not a great solution. Given that everything apart from zpool usage stats works fine, it would be much better to allow use of a parent dataset, which can be done by fixing the zpool command invocation.

This is much more flexible: it allows the zpool to be shared with other applications, and have separate quotas and usage reporting for libvirt versus those other applications. I definitely wouldn't want to dedicate an entire zpool to libvirt.
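
For example (commands for illustration only), a dedicated parent dataset lets you cap and report libvirt's usage independently of whatever else lives in the zpool:

# zfs set quota=50G zfs/images
# zfs get -r used,quota zfs/images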

(2) In any case, if an error occurs when libvirt is refreshing a storage pool and shelling out to zfs subcommands, the failure should be reported and propagated. Ideally it wouldn't discard the previous volume info either.

Any system error which can happen, will sooner or later happen :-)

summary: - virt-manager destroys all volumes in libvirt zfs pool
+ refreshign a pool destroys all volume info in libvirt zfs backend
summary: - refreshign a pool destroys all volume info in libvirt zfs backend
+ refreshing a pool destroys all volume info in libvirt zfs backend
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Opened for a discussion upstream at: https://www.redhat.com/archives/libvir-list/2018-May/msg00043.html
I didn't have your email address to CC you, but you can track it via the web UI of the mailing list.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thanks for chiming in on the upstream discussion btw!
