Enable the installation of snaps in a classic chroot

Bug #1609903 reported by Dan Watkins on 2016-08-04
Affects: Snappy | Status: Triaged | Importance: High | Assigned to: Unassigned

Bug Description

Cloud images are built using livecd-rootfs and then customised for specific clouds by chrooting into the image and modifying its contents (without ever booting it).

In order to use snaps to deliver functionality required by clouds on first boot (generally speaking, the agents that they provide), we need some way of installing snaps within these chroots.

Currently, running `snap install ...` within a chroot fails because snapd is (of course) not running.
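
For illustration, the failure mode looks like this (snap name and exact error text illustrative; wording varies by snapd version):

$ sudo chroot /path/to/rootfs snap install some-snap
error: cannot communicate with server: Post http://localhost/v2/snaps/some-snap: dial unix /run/snapd.socket: connect: no such file or directory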

Tags: cpc
Dan Watkins (daniel-thewatkins) wrote:

2016-08-04 17:46:33 slangasek we're going to have a 'snap fetch', I think?
2016-08-04 17:46:53 Odd_Bloke Yeah, I've heard about that in the context of caching a snap for later installation.
2016-08-04 17:47:33 slangasek Odd_Bloke: right. So I think that we may need to do a 'snap fetch', followed by a bit of manual fiddling to put the snap bits in the right place as part of the image build
2016-08-04 17:47:55 slangasek niemeyer: ^^ maybe you'd be able to comment on this problem (pre-installing snaps as part of a classic image build, which happens in a chroot with no running snapd service)
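
(For reference, the fetch command discussed above eventually shipped as `snap download`, which retrieves both the .snap file and its store assertions, roughly:)

$ snap download --channel=stable core
# produces core_<rev>.snap and core_<rev>.assert in the current directory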

Steve Langasek (vorlon) on 2016-08-04
Changed in snappy:
importance: Undecided → High
Jamie Bennett (jamiebennett) wrote:

Since 2.12 we have had a mechanism in place that allows snaps to be installed on first boot using a seed.yaml file and a seed directory. Would this be enough?
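
For context, a first-boot seed looks roughly like this (snap names, channels and file names are illustrative; the exact schema may differ between snapd versions):

$ cat /var/lib/snapd/seed/seed.yaml
snaps:
 - name: core
   channel: stable
   file: core_1689.snap
 - name: some-snap
   channel: stable
   file: some-snap_12.snap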

Dan Watkins (daniel-thewatkins) wrote:

Provided this happened early enough in boot, this could work. As an example of why we need this to be early in boot, on Azure we can't get networking until walinuxagent has run for the first time.

As a side note, how much slower is installing a seeded snap at first boot than simply mounting it at boot time? Clouds are very, very focused on boot time at the moment, so that's something we'll definitely need to be sensitive to.

Jamie Bennett (jamiebennett) wrote:

The seed functionality has just landed in snappy, so we will test the impact of installing seeded snaps at boot time for reference. It may be fast enough for your use case, but until we have the data it is hard to say.

Michael Vogt (mvo) wrote:

I started looking into this today. We have two options:

1. add seed.yaml support for classic
- you put your snaps into seed.yaml and the needed assertions into seed/assertions (on-disk layout sketched after this list)
- on first boot snapd will install them
- this will slow down booting
- we need to figure out what to do about seeding on "normal" classic systems, i.e. should we wait until we see a seed.yaml? should we consider the system seeded if we don't see one?

2. make snapd work in chroots
- you run snapd inside a chroot
- you install everything inside a chroot
- boot is fast
- we need to figure out how to do that; right now snapd requires a running systemd for various operations, including just mounting the snaps.
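
For reference, the on-disk layout option (1) implies, roughly (paths illustrative; the .snap files sit alongside the assertions that authorise them):

/var/lib/snapd/seed/
  seed.yaml
  snaps/
    core_1689.snap
    some-snap_12.snap
  assertions/
    model
    core_1689.assert
    some-snap_12.assert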

Michael Vogt (mvo) wrote:

Ok, here is a slightly gross idea if installing inside the chroot is a hard requirement.
This will *only* work for a limited subset of snaps, only for now, and I have not yet tested it carefully:
- get a chroot of the system
- ensure system /dev is bind mounted for /dev/loop? access
- override /bin/systemctl with a shell script that is smart enough to understand snapd mount units and mount the squashfs snaps to /snap/$snap/$rev, and that returns "sure, will do" (exit code 0) for everything else (a sketch follows at the end of this comment)
- override /bin/udevadm with a stub that just prints "sure" and exits 0
- run snapd inside the chroot manually via: sudo /lib/systemd/systemd-activate -E SNAP_REEXEC=0 -l /run/snapd.socket -l /run/snapd-snap.socket /usr/lib/snapd/snapd &
- run snap install core
- cross fingers
- snapd will install core and trigger a restart of itself
- run snapd (same cmdline as above) again
- snap install yoursnap [1]
- stop snapd

This will probably work, but it is obviously hacky, and seeding is much cleaner. However, if running inside a chroot is a use case we want to support, we can improve snapd here. It will still be much more limited because of the lack of systemd, i.e. a snapd inside a chroot would not be able to run services, but for putting bits on disk we can make snapd behave better.

[1] the snap probably cannot have hooks or anything else fancy
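
For the record, a minimal sketch of the systemctl shim described above (untested; assumes snapd's generated mount units land in /etc/systemd/system and contain plain What=/Where= lines):

#!/bin/sh
# Fake systemctl for use inside the chroot: "start" of a snap mount
# unit performs the squashfs mount ourselves; everything else is a
# cheerful no-op (exit 0).
if [ "$1" = "start" ]; then
    case "$2" in
    snap-*.mount)
        unit="/etc/systemd/system/$2"
        what=$(sed -n 's/^What=//p' "$unit")
        where=$(sed -n 's/^Where=//p' "$unit")
        mkdir -p "$where"
        mount -t squashfs -o ro "$what" "$where"
        ;;
    esac
fi
exit 0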

tags: added: cpc
Michael Vogt (mvo) wrote:

seed.yaml support for classic is landing now, so this should allow us to go with option (1), and we can figure out how good or bad it is for boot speed.

Dan Watkins (daniel-thewatkins) wrote:

So seeding as it currently works isn't sufficient for what we need to do, because snapd runs too late in boot. To maintain parity with images as they ship now, we need commands installed by snapd to be available to, at the very least, cloud-init user-data; we may also want them to be available to services that run during the boot process.

I've been experimenting with moving snapd earlier in the boot process, just to see if that would address our issues. I've included the modified systemd unit files I was using at the bottom of this comment.

At a high level, it looks like we might be able to use this. I've been testing on GCE (because we have a snap of their SDK which we would eventually like to install instead of the Ubuntu package), and it looks like the seeding process might have minimal impact on boot; the time that apparmor, networking and the metadata collection phase of cloud-init take to start is roughly the amount of time that seeding takes. (I've attached a plot of a boot with my unit files, though you can't really see the seeding on here.)

Not being able to see the seeding from systemd's point-of-view is an issue; for a reliable boot, we would need to be able to specify that units shouldn't start until seeding is complete (i.e. After=...).

So, to summarise: in order to start using this, we will need two things:

- snapd to be configured to start as early as it possibly can, and
- snapd to expose seed-completion in a way that systemd units can relate to (see the ordering sketch after the unit files below).

---

$ cat /lib/systemd/system/snapd.service
[Unit]
Description=Snappy daemon
Requires=snapd.socket

DefaultDependencies=no
Wants=network-pre.target
After=systemd-remount-fs.service
Before=NetworkManager.service
Before=network-pre.target
Before=shutdown.target
Before=sysinit.target
Conflicts=shutdown.target

[Service]
ExecStart=/usr/lib/snapd/snapd
EnvironmentFile=/etc/environment
Restart=always

[Install]
WantedBy=multi-user.target

$ cat /lib/systemd/system/snapd.socket
[Unit]
Description=Socket activation for snappy daemon
DefaultDependencies=no
Before=sockets.target

[Socket]
ListenStream=/run/snapd.socket
ListenStream=/run/snapd-snap.socket
SocketMode=0666
# these are the defaults, but can't hurt to specify them anyway:
SocketUser=root
SocketGroup=root

[Install]
WantedBy=sockets.target
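
Regarding the second point above: with a synchronisation unit exposed by snapd, a consuming service would only need ordering along these lines (unit name shown as an assumption here; snapd later gained snapd.seeded.service for exactly this):

[Unit]
After=snapd.seeded.service
Wants=snapd.seeded.service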

Boris Rybalkin (ribalkin) wrote:

Out of curiosity, why not just boot the image to install the snaps?
It should be easier to clean the rootfs after the boot to make it look like a first boot again.
Tools like docker or systemd-nspawn should be easy to automate, depending on your build infrastructure.
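
For illustration, a rough version of that flow (paths and cleanup steps illustrative; a real build would need a more careful first-boot reset):

# boot the unpacked rootfs so a real systemd (and snapd) runs
$ sudo systemd-nspawn --boot --directory=/path/to/rootfs
# inside the container: snap install some-snap && poweroff
# back on the host, scrub first-boot state, e.g.:
$ sudo truncate -s 0 /path/to/rootfs/etc/machine-id
$ sudo rm -rf /path/to/rootfs/var/log/* /path/to/rootfs/var/lib/cloud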

Zygmunt Krynicki (zyga) wrote:

I'm marking this as triaged, as it's certainly not a new bug.

Is this still an issue? I think the current design is that snaps are dropped into a specific place and then, on boot, the system is "seeded" and things just work.

Changed in snappy:
status: New → Triaged

Dan Watkins (daniel-thewatkins) wrote:

So pre-seeding is what we've gone ahead with; it would still be nice to be able to perform a proper installation before first boot, but I don't think it's a high priority.

As a side note, the lack of a "seeded" synchronisation point is being worked on in https://github.com/snapcore/snapd/pull/5124 as it's now a hot issue in bionic.
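
(For reference, the synchronisation point that came out of that work can also be consumed from scripts, assuming a snapd new enough to have it:)

# block until first-boot seeding has completed
$ snap wait system seed.loaded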
