Can only run hello snap as root

Bug #1686852 reported by Karan Bavishi
This bug affects 2 people
Affects: Snappy
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

I am unable to run the "hello" snap as the normal user. These are the steps I followed:

$ sudo snap install hello
hello 2.10 from 'canonical' installed
$ snap run hello
cannot create user data directory: /users/kbavishi/snap/hello/20: Read-only file system

I checked the permissions of the directory that it complained about, and they look fine to me:
$ ls -al /users/kbavishi/snap/hello/20
total 8
drwxr-xr-x 2 kbavishi netopt-PG0 4096 Apr 27 15:53 .
drwxr-xr-x 4 kbavishi netopt-PG0 4096 Apr 27 15:53 ..

If I run the snap as root, it works.
$ sudo snap run hello
Hello, world!

Any idea what I'm doing wrong?

I browsed through the snap code, and this is probably the line which throws the error: https://github.com/snapcore/snapd/blob/2467903b927afe9eabd18ecde6237a90da840295/cmd/snap-confine/user-support.c#L40

I'm running Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-45-generic x86_64)

Revision history for this message
Mark Shuttleworth (sabdfl) wrote : Re: [Bug 1686852] [NEW] Can only run hello snap as root

Thanks Karan

The path /users/ is unusual; mostly I see /home/ in these cases. I wonder
if that's the root cause, or related? How is /users/ being mounted?

Mark

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Mark is right, snapd is very opinionated about where user directories can exist. Can you tell us more about your setup? Why is the user directory in /users/ and not in /home? Are you using NFS?

Changed in snappy:
status: New → Incomplete
Revision history for this message
Karan Bavishi (kbavishi) wrote :

Thanks for the comments, guys.

This is a university lab machine I am using for testing. It has its own custom setup to ensure easy experiments across different runs, and hence the weird "/users/" setup.

I don't think it's using NFS. Here's the output of "df" and "mount". There seems to be a regular disk /dev/sda1 mounted at "/".

$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 68G 0 68G 0% /dev
tmpfs 14G 11M 14G 1% /run
/dev/sda1 17G 3.8G 13G 24% /
tmpfs 68G 0 68G 0% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 68G 0 68G 0% /sys/fs/cgroup
ops.wisc.cloudlab.us:/share 54G 2.1G 52G 4% /share
ops.wisc.cloudlab.us:/proj/netopt-PG0 108G 83G 26G 77% /proj/netopt-PG0
tmpfs 14G 0 14G 0% /run/user/20001
/dev/loop0 83M 83M 0 100% /snap/core/1577
/dev/loop1 132k 132k 0 100% /snap/hello/20

$ mount -l | grep sda
/dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro,data=ordered) [/]
/dev/sda1 on /var/lib/docker/aufs type ext3 (rw,relatime,errors=remount-ro,data=ordered) [/]

Revision history for this message
John Lenton (chipaca) wrote :

I don't know if it's relevant to this or not (as I don't know if /users is connected to, say, /share somehow), but both /share and /proj/netopt-PG0 look like NFS mounts to me.

Revision history for this message
Karan Bavishi (kbavishi) wrote :

That's true: "/share" and "/proj/netopt-PG0" are NFS mounts, but "/" itself isn't, as the output of mount below shows:

$ mount | grep nfs
ops.wisc.cloudlab.us:/proj/netopt-PG0 on /proj/netopt-PG0 type nfs (rw, ..., addr=128.104.222.8)
ops.wisc.cloudlab.us:/share on /share type nfs (rw, ..., addr=128.104.222.8)
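A quick sanity check, since the mount table alone can be misleading: print the filesystem type backing each component of the path in question (stat -f with %T is GNU coreutils; the paths below are the ones from this report, and components that don't exist are skipped). It's also worth noting that snap-confine reports the error from inside its own mount namespace, so a path that is writable on the host may still appear read-only there.

```shell
# Print the filesystem type backing each component of the home path.
# Paths are taken from this bug report; missing ones are skipped so the
# loop also runs on machines with a different layout.
for p in / /users /users/kbavishi /users/kbavishi/snap; do
    [ -e "$p" ] || { echo "$p: (missing)"; continue; }
    printf '%-25s %s\n' "$p" "$(stat -f -c %T "$p")"
done
```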

Looking at the source code of sc_nonfatal_mkpath(), which apparently throws the error, it walks the directory "/users/kbavishi/snap/hello/20" one component at a time using mkdirat() and openat(). I looked at the permissions for all the directories along this path and they seem OK to me (see outputs below).

Any idea how to debug this further? I wanted to run strace, but unfortunately that requires sudo privileges, and with sudo the hello snap error disappears.

$ ls / -al
total 122
drwxr-xr-x 28 root root 4096 Apr 28 08:03 .
drwxr-xr-x 28 root root 4096 Apr 28 08:03 ..
[...] snipped

$ ls /users/ -al
total 28
drwxr-xr-x 7 root root 4096 Apr 28 06:10 .
drwxr-xr-x 28 root root 4096 Apr 28 08:03 ..
drwxr-xr-x 3 geniuser netopt-PG0 4096 Apr 28 06:10 geniuser
drwxr-xr-x 11 kbavishi netopt-PG0 4096 Apr 28 08:35 kbavishi
drwxr-xr-x 3 kshiteej netopt-PG0 4096 Apr 28 06:10 kshiteej
drwxr-xr-x 3 raajay86 netopt-PG0 4096 Apr 28 06:10 raajay86
drwxr-xr-x 3 tranlam netopt-PG0 4096 Apr 28 06:10 tranlam

$ ls /users/kbavishi/ -al
total 148
drwxr-xr-x 11 kbavishi netopt-PG0 4096 Apr 28 08:35 .
drwxr-xr-x 7 root root 4096 Apr 28 06:10 ..
drwxr-xr-x 3 kbavishi netopt-PG0 4096 Apr 28 06:15 snap

$ ls /users/kbavishi/snap -al
total 12
drwxr-xr-x 3 kbavishi netopt-PG0 4096 Apr 28 06:15 .
drwxr-xr-x 11 kbavishi netopt-PG0 4096 Apr 28 08:35 ..
drwxr-xr-x 4 kbavishi netopt-PG0 4096 Apr 28 06:15 hello
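The component-by-component walk that sc_nonfatal_mkpath() performs can be sketched in plain shell (an illustration under a scratch directory in /tmp, not the snapd code itself): each component is created in turn, and the first mkdir that fails is the one reported in the error message.

```shell
# Shell sketch of sc_nonfatal_mkpath(): walk the target path one component
# at a time, creating each missing directory, and report the first failure.
# The target below is a stand-in under /tmp, mirroring the path from this bug.
target="/tmp/mkpath-demo/users/kbavishi/snap/hello/20"
path=""
for comp in $(echo "$target" | tr '/' ' '); do
    path="$path/$comp"
    if [ ! -d "$path" ]; then
        if mkdir "$path"; then
            echo "created $path"
        else
            echo "mkdir failed at $path (where EROFS would surface)" >&2
            exit 1
        fi
    fi
done
```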

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for Snappy because there has been no activity for 60 days.]

Changed in snappy:
status: Incomplete → Expired
Revision history for this message
Adam Conrad (adconrad) wrote :

Expiring a bug due to lack of response from the developers is probably poor form. :P

Changed in snappy:
status: Expired → New
Revision history for this message
Tobias McQuire (ludden) wrote :

Same issue here.
We are using NFS for the home directories. Is there any way to configure snappy to work correctly?
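For home directories outside /home (NFS-backed or otherwise), a workaround sometimes suggested is to bind-mount the real location under /home, so that snap-confine sees a path it supports. This is a sketch only, untested here, using the /users layout from this report as the example:

```shell
# Hypothetical workaround (requires root; lasts until reboot unless added
# to /etc/fstab): expose the real home directory under /home via bind mount.
sudo mkdir -p /home/kbavishi
sudo mount --bind /users/kbavishi /home/kbavishi
```

Whether snapd then picks up the bind-mounted path depends on how the user's home directory is resolved; updating the account's home to /home/kbavishi (e.g. with usermod -d) may also be needed.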

Revision history for this message
David Banks (amoe) wrote :

Here's a log generated with the snap tool on CentOS.

[amoe@localhost ~]$ SNAPD_DEBUG=1 /var/lib/snapd/snap/bin/lxc --help
2018/07/19 12:16:23.482666 cmd.go:70: DEBUG: re-exec not supported on distro "centos" yet
DEBUG: security tag: snap.lxd.lxc
DEBUG: executable: /usr/lib/snapd/snap-exec
DEBUG: confinement: non-classic
DEBUG: base snap: core
DEBUG: ruid: 1000, euid: 0, suid: 0
DEBUG: rgid: 1000, egid: 0, sgid: 0
DEBUG: checking if the current process shares mount namespace with the init process
DEBUG: re-associating is not required
DEBUG: creating lock directory /run/snapd/lock (if missing)
DEBUG: opening lock directory /run/snapd/lock
DEBUG: opening lock file: /run/snapd/lock/.lock
DEBUG: sanity timeout initialized and set for three seconds
DEBUG: acquiring exclusive lock (scope (global))
DEBUG: sanity timeout reset and disabled
DEBUG: ensuring that snap mount directory is shared
DEBUG: unsharing snap namespace directory
DEBUG: creating namespace group directory /run/snapd/ns
DEBUG: namespace group directory does not require intialization
DEBUG: releasing lock (scope: (global))
DEBUG: creating lock directory /run/snapd/lock (if missing)
DEBUG: opening lock directory /run/snapd/lock
DEBUG: opening lock file: /run/snapd/lock/lxd.lock
DEBUG: sanity timeout initialized and set for three seconds
DEBUG: acquiring exclusive lock (scope lxd)
DEBUG: sanity timeout reset and disabled
DEBUG: initializing mount namespace: lxd
DEBUG: opening namespace group directory /run/snapd/ns
DEBUG: found base snap filesystem device 7:0
DEBUG: attempting to re-associate the mount namespace with the namespace group lxd
DEBUG: successfully re-associated the mount namespace with the namespace group lxd
DEBUG: found root filesystem inside the mount namespace 7:0
DEBUG: cannot remain in /local/amoe, moving to the void directory
DEBUG: successfully moved to /var/lib/snapd/void
DEBUG: releasing resources associated with namespace group lxd
DEBUG: moved process 7891 to freezer cgroup hierarchy for snap lxd
DEBUG: releasing lock (scope: lxd)
DEBUG: resetting PATH to values in sync with core snap
DEBUG: snappy_udev_init
DEBUG: creating user data directory: /local/amoe/snap/lxd/7412
cannot create user data directory: /local/amoe/snap/lxd/7412: Read-only file system
