Just wanted to confirm that this happens with mirrored zpools as well (both the data volumes and the cache volume are shown in Nautilus).
~$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sdb
├─sdb9
├─sdb3 vfat EFI D6C0-5426 /boot/efi
└─sdb1 zfs_member rpool 8635565870410578780
zd0 [SWAP]
zd48
├─zd48p1
└─zd48p2
zd16
sdc
zd32
├─zd32p1
└─zd32p2
sda
├─sda9
├─sda3 vfat EFI D73A-786C
└─sda1 zfs_member rpool 8635565870410578780
nvme0n1 zfs_member
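For what it's worth, the devices Nautilus picks up seem to be exactly the ones lsblk tags with FSTYPE `zfs_member`. A quick way to pull those out is a small awk filter; this is just a sketch, with a few sample lines piped in instead of live `lsblk` output so it's reproducible:

```shell
# Filter devices whose FSTYPE is zfs_member (sample data stands in for
# `lsblk -o NAME,FSTYPE -nr` output on a real system).
printf 'sdb1 zfs_member\nsdb3 vfat\nnvme0n1 zfs_member\n' |
  awk '$2 == "zfs_member" {print $1}'
# prints:
# sdb1
# nvme0n1
```

On a live system you would replace the `printf` with `lsblk -o NAME,FSTYPE -nr` to get the same list Nautilus is showing.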
~$ sudo zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0 in 10h46m with 0 errors on Sun Mar 12 12:10:34 2017
config:
NAME                             STATE     READ WRITE CKSUM
rpool                            ONLINE       0     0     0
  mirror-0                       ONLINE       0     0     0
    wwn-0x500003976bd0292c-part1 ONLINE       0     0     0
    wwn-0x500003976bd0292f-part1 ONLINE       0     0     0
cache
  nvme0n1                        ONLINE       0     0     0
errors: No known data errors