mountpoint remains in use after restore snapshot

Bug #1686036 reported by Anatoliy
Affects       Status        Importance  Assigned to        Milestone
lxd (Ubuntu)  Fix Released  High        Christian Brauner
Xenial        Fix Released  Undecided   Unassigned
Zesty         Fix Released  Undecided   Unassigned
Artful        Fix Released  High        Christian Brauner

Bug Description

# uname -a
Linux lxd2-chel1 4.4.0-72-generic #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

lxd: 2.12-0ubuntu3~ubuntu16.04.1~ppa1
zfsutils-linux: 0.6.5.6-0ubuntu16

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

After restoring a container from a snapshot, no new snapshot can be created and no further restore is possible until the container is restarted.

Example:

# lxc image list
+---------------+--------------+--------+--------------------------------------+--------+---------+------------------------------+
|     ALIAS     | FINGERPRINT  | PUBLIC |             DESCRIPTION              |  ARCH  |  SIZE   |         UPLOAD DATE          |
+---------------+--------------+--------+--------------------------------------+--------+---------+------------------------------+
| debian/jessie | ba43812c4cb9 | no     | Debian jessie amd64 (20170423_22:42) | x86_64 | 94.14MB | Apr 24, 2017 at 9:07am (UTC) |
+---------------+--------------+--------+--------------------------------------+--------+---------+------------------------------+

# lxc launch debian/jessie
Creating popular-kitten

The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting popular-kitten

# lxc info popular-kitten
Name: popular-kitten
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2017/04/25 07:17 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 6965
Ips:
  lo: inet 127.0.0.1
  lo: inet6 ::1
Resources:
  Processes: 7
  Disk usage:
    root: 1.48MB
  CPU usage:
    CPU usage (in seconds): 25
  Memory usage:
    Memory (current): 16.22MB
    Memory (peak): 23.01MB
  Network usage:
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0

# lxc profile show default
config: {}
description: Default LXD profile
devices:
  root:
    path: /
    pool: main-pool
    type: disk
name: default
used_by:
- /1.0/containers/popular-kitten

# lxc snapshot popular-kitten

# zfs get mounted main-pool/containers/popular-kitten
NAME                                 PROPERTY  VALUE  SOURCE
main-pool/containers/popular-kitten  mounted   yes    -

# zfs get mounted main-pool/snapshots/popular-kitten
NAME                                PROPERTY  VALUE  SOURCE
main-pool/snapshots/popular-kitten  mounted   yes    -

# lxc restore popular-kitten snap0

# zfs get mounted main-pool/snapshots/popular-kitten
NAME                                PROPERTY  VALUE  SOURCE
main-pool/snapshots/popular-kitten  mounted   yes    -

# zfs get mounted main-pool/containers/popular-kitten
NAME                                 PROPERTY  VALUE  SOURCE
main-pool/containers/popular-kitten  mounted   no     -

# lxc snapshot popular-kitten
error: Failed to mount ZFS filesystem: filesystem 'main-pool/containers/popular-kitten' is already mounted
cannot mount 'main-pool/containers/popular-kitten': mountpoint or dataset is busy

# lxc restore popular-kitten snap0
error: Failed to mount ZFS filesystem: filesystem 'main-pool/containers/popular-kitten' is already mounted
cannot mount 'main-pool/containers/popular-kitten': mountpoint or dataset is busy
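
One way to investigate where the stale mount lives (a diagnostic sketch, not part of the original report, assuming the leak is a mount pinned in some process's mount namespace) is to scan every process's mount table for the dataset:

# grep -l main-pool/containers/popular-kitten /proc/[0-9]*/mountinfo 2>/dev/null

If the container's init process (pid 6965 above) shows up here while the host's own mount table no longer lists the dataset, the reference that keeps the mountpoint "busy" lives inside the container's mount namespace.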

But the container still works:

# lxc info popular-kitten
Name: popular-kitten
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2017/04/25 07:17 UTC
Status: Running
...

# lxc exec popular-kitten bash
root@popular-kitten:~# uptime
 07:34:06 up 8 min, 0 users, load average: 0.00, 0.02, 0.03

After restarting the container:

# lxc restart popular-kitten

# zfs get mounted main-pool/containers/popular-kitten
NAME                                 PROPERTY  VALUE  SOURCE
main-pool/containers/popular-kitten  mounted   yes    -
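
The restart presumably helps because stopping the container tears down its mount namespace and drops the lingering reference. If a full restart is undesirable, a manual unmount/mount cycle of the dataset might work as a lighter-weight workaround (an untested sketch, not from the report; it can fail with the same "busy" error while the mount is still pinned elsewhere):

# zfs unmount main-pool/containers/popular-kitten
# zfs mount main-pool/containers/popular-kitten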

This problem does not occur on another server:

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.10
Release: 16.10
Codename: yakkety

# lxd --version
2.4.1

zfsutils-linux: 0.6.5.8-0ubuntu4.1

Tags: lxd snapshot
Revision history for this message
Christian Brauner (cbrauner) wrote :

This is very likely not a LXD bug. I suspect this is https://github.com/zfsonlinux/zfs/issues/5796 again which I reported to ZFS upstream. I'll ping them about this again tomorrow and if I don't hear back will take a look at this myself.
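
If it is the same issue, the tell-tale symptom (my reading of the linked report, not verified here) is the contradiction already visible above: one would expect `zfs get mounted` to report "no" while an explicit mount attempt still fails with the "mountpoint or dataset is busy" error, because a hidden reference survives in another mount namespace:

# zfs get mounted main-pool/containers/popular-kitten
# zfs mount main-pool/containers/popular-kitten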

Revision history for this message
Christian Brauner (cbrauner) wrote :

Reproducible. Can you please open this bug on GitHub?

Changed in lxc (Ubuntu):
status: New → In Progress
assignee: nobody → Christian Brauner (cbrauner)
Changed in lxc (Ubuntu):
importance: Undecided → High
Changed in lxc (Ubuntu):
status: In Progress → Fix Committed
Revision history for this message
Anatoliy (sayto) wrote :

Upgraded lxd to 2.13, but the bug still exists =(

Revision history for this message
Christian Brauner (cbrauner) wrote :
summary: - strange behavior after restore snapshot
+ mountpoint remains in use after restore snapshot
no longer affects: lxc (Ubuntu)
Changed in lxd (Ubuntu):
status: New → Fix Committed
importance: Undecided → High
assignee: nobody → Christian Brauner (cbrauner)
Revision history for this message
Stéphane Graber (stgraber) wrote :

xenial, yakkety and zesty will get the fix when we backport LXD 2.14.

Is LXD 2.0.x also affected? If so, I'm not sure we have a fix for it in 2.0.10, so we'd need a fix in stable-2.0 upstream so that 2.0.11 can have it.

Changed in lxd (Ubuntu Artful):
status: Fix Committed → Fix Released
Changed in lxd (Ubuntu Zesty):
status: New → Triaged
Changed in lxd (Ubuntu Yakkety):
status: New → Triaged
Changed in lxd (Ubuntu Xenial):
status: New → Triaged
Changed in lxd (Ubuntu Xenial):
status: Triaged → Fix Released
no longer affects: lxd (Ubuntu Yakkety)
Changed in lxd (Ubuntu Zesty):
status: Triaged → Fix Released