LXD & block storage: unit agent never runs

Bug #1799989 reported by Cory Johns
This bug affects 2 people
Affects: Canonical Juju
Status: Triaged
Importance: Low
Assigned to: Unassigned
Milestone: none

Bug Description

See this trivial test charm: https://github.com/johnsca/test-block-storage-lxd

Deploying that on the LXD provider without explicitly attaching storage causes the unit agent to be stuck at "allocating": https://pastebin.ubuntu.com/p/krPz5VMf8q/

Per discussion with Ian, it seems that this has to do with the security concerns around passing block devices into containers, as discussed here: https://discuss.linuxcontainers.org/t/providing-access-to-loop-and-other-devices-in-containers/. In this case, though, since Juju would be creating the loop device specifically for the charm to use, it seems safe and reasonable to enable this to work.
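
A minimal reproduction sketch (controller name is illustrative; the charm is the linked test charm above):

    juju bootstrap localhost lxd-test
    juju deploy ./test-block-storage-lxd    # no explicit --storage, so the charm's block storage defaults to the loop provider
    juju status                             # the unit never leaves "allocating"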

Changed in juju:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.5-beta1
assignee: nobody → Joseph Phillips (manadart)
Revision history for this message
Joseph Phillips (manadart) wrote :

Machine log looks like this:
https://pastebin.ubuntu.com/p/vQCtphkh5k/

Revision history for this message
Joseph Phillips (manadart) wrote :

Interestingly, the command run in the container by the storage provisioner is:

    fallocate -l 1024MiB /var/lib/juju/storage/loop/volume-0-0

But if one uses:

    fallocate -x -l 1024MiB /var/lib/juju/storage/loop/volume-0-0

then it works.

From fallocate help:

    -x, --posix use posix_fallocate(3) instead of fallocate(2)
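
One likely explanation (an assumption; the backing filesystem isn't confirmed in the log above) is that the container's storage backend, commonly ZFS for LXD, doesn't support fallocate(2), while posix_fallocate(3) falls back to writing the blocks out explicitly. A quick way to compare the two paths from the host (container name is a placeholder):

    # "juju-0-lxd-0" is a placeholder container name.
    lxc exec juju-0-lxd-0 -- fallocate -l 1024MiB /tmp/vol-test       # fallocate(2); can fail on backends without fallocate support
    lxc exec juju-0-lxd-0 -- fallocate -x -l 1024MiB /tmp/vol-test    # posix_fallocate(3); falls back to writing zeroes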

Revision history for this message
Joseph Phillips (manadart) wrote :

What is the expected behaviour for the test charm in question?

1) That a block device will be created and made available inside the container; or
2) that a block device will be created on the host and exposed inside the container?

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1799989] Re: LXD & block storage: unit agent never runs

For LXD block storage, I'm pretty sure we want to be mounting storage from
the host, so that we get the same property of 'storage that can outlive
the instance'. Otherwise you'd just use somewhere on the instance's root
dir.

John
=:->

Revision history for this message
Ian Booth (wallyworld) wrote :

I misremembered the exact mechanism needed. Here's some (slightly old) documentation:

https://docs.jujucharms.com/2.4/en/charms-storage#loop-devices-and-lxd
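
The mechanism described there boils down to granting the container access to the host's loop devices, roughly along these lines (a sketch, not the exact commands from the linked doc; container and device names are illustrative):

    # Expose the loop control node and a specific loop device to the container.
    lxc config device add juju-0-lxd-0 loop-control unix-char path=/dev/loop-control
    lxc config device add juju-0-lxd-0 loop0 unix-block path=/dev/loop0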

Revision history for this message
John A Meinel (jameinel) wrote :

Reading through the links from Cory and Ian makes it clear that doing so pokes lots of holes in container security. Specifically, the pool of loopback devices is shared on the host, and by default LXD isn't allowed to mount them (there is a security risk in mounting arbitrary filesystems, because you are passing untrusted data directly to the kernel).

I think we *could* implement support for this by allocating a blob on the host machine, and then turning that into a loop device, and then passing the loop device into the container.
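
For reference, that host-side approach would look roughly like this (a sketch only; the container name and loop device number are illustrative, with the device number coming from whatever losetup assigns):

    # On the host: back a file with a loop device, then hand only that device to the container.
    fallocate -l 1024MiB /var/lib/juju/storage/loop/volume-0-0
    losetup --find --show /var/lib/juju/storage/loop/volume-0-0    # prints the allocated device, e.g. /dev/loop3
    lxc config device add juju-0-lxd-0 volume-0-0 unix-block path=/dev/loop3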

However, while we *can* work through all the details of doing so in a safe fashion, I wonder whether it is worth the time to implement. Do we have a *production* case where we want to support block devices in LXD containers? Or is it just "I want to play with a charm that could use block storage"?

It seems like it would definitely be worth giving a better error message ("block storage not supported on LXD"), but unless we have real use cases, I'd rather not spend a bunch of time getting it right only to never have it actively used.

Changed in juju:
milestone: 2.5-beta1 → 2.5-beta2
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.5-beta2 → 2.5-beta3
Changed in juju:
milestone: 2.5-beta3 → 2.5-rc1
Changed in juju:
milestone: 2.5-rc1 → 2.5.1
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.5.1 → 2.5.2
Changed in juju:
assignee: Joseph Phillips (manadart) → nobody
Changed in juju:
milestone: 2.5.2 → 2.5.3
Changed in juju:
milestone: 2.5.3 → 2.5.4
Changed in juju:
milestone: 2.5.4 → 2.5.5
Changed in juju:
milestone: 2.5.6 → 2.5.8
Changed in juju:
milestone: 2.5.8 → 2.5.9
Revision history for this message
Anastasia (anastasia-macmood) wrote :

Removing from a milestone as this work will not be done in 2.5 series.

Changed in juju:
milestone: 2.5.9 → none
Revision history for this message
Canonical Juju QA Bot (juju-qa-bot) wrote :

This bug has not been updated in 2 years, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: High → Low
tags: added: expirebugs-bot