conflict when managing LXD storage with Juju 2.0

Bug #1573681 reported by James Page on 2016-04-22
This bug affects 6 people
Affects              Importance  Assigned to
OpenStack LXD Charm  Medium      Unassigned
juju                 High        Unassigned
nova-lxd             Medium      Unassigned
lxd (Ubuntu)         Undecided   Unassigned

Bug Description

The lxd charm is designed to be placed on bare metal alongside the nova-compute charm (or insert your charm here - it's not fixed).

During configuration it will set up LXD with an appropriate storage backend based on the end-user configuration supplied.

However, with Juju 2.0, it's possible that Juju will also be trying to manage LXD containers on the same physical host; reconfiguring LXD's storage can break that and cause Juju's LXD container units to fail to provision.

James Page (james-page) wrote :

Note that with Juju 1.25 this is not an issue, as LXC containers are created and managed quite differently from LXD containers, so no conflict occurs.

tags: added: kanban-cross-team
tags: added: landscape
tags: removed: kanban-cross-team
Changed in juju-core:
status: New → Triaged
importance: Undecided → High
tags: added: juju-release-support lxd
affects: juju-core → juju
Changed in juju:
milestone: none → 2.1.0
Anastasia (anastasia-macmood) wrote :

I am removing 2.1 milestone for "juju" as we will not be addressing this issue in 2.1.

Changed in juju:
milestone: 2.1.0 → none
Ryan Beisner (1chb1n) wrote :

Additional info from duplicate bug: https://bugs.launchpad.net/nova-lxd/+bug/1661762

Changed in charms:
status: New → Invalid
James Page (james-page) on 2017-02-23
Changed in charm-lxd:
status: New → Triaged
Changed in nova-lxd:
status: New → Triaged
importance: Undecided → Medium
Changed in charm-lxd:
importance: Undecided → Medium
no longer affects: charms
Ryan Beisner (1chb1n) on 2017-03-09
tags: added: uosci
James Page (james-page) wrote :

This bug really stems from the fact that two things are trying to control the same resource: the LXD storage configuration for the containers each wants to run and manage.

Ideally these things would be decoupled as much as possible: Juju should be allowed to configure a pool of storage based on the intent of the Juju user. I think that in 2.1.x, Juju will revert to using directory-backed instances on the host OS root filesystem.

The LXD charm (in conjunction with the nova-lxd driver) will want to configure a number of block devices for use with either ZFS or BTRFS as a second storage option, and notify the nova-lxd driver which storage pool to use for nova machine containers.

I think this all points to a requirement for LXD to support multiple storage backends (perhaps with a default if a tool or user does not express any intent), with Juju and charm updates to consume this feature and provide the right level of isolation between Juju LXD containers and nova-lxd containers.

Anastasia (anastasia-macmood) wrote :

Looping in the LXD project for comment on support of multiple storage backends, as per comment #4.

Stéphane Graber (stgraber) wrote :

LXD does support multiple storage pools as of LXD 2.9. Juju right now will create a default pool and attach it to the default profile if one isn't already detected.

OpenStack could simply define its own pool and attach it to its own profiles, keeping both storage pools separate.

Do note that this is a very major new feature and API, so it will NOT be pushed to the 2.0.x branch. Users who want to use it will need xenial-backports, which will contain an LXD with support for that API once LXD 2.12 comes out next week (we've been holding back backports while we sort out some storage-related issues).
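As a sketch of the separation Stéphane describes, the commands below create a dedicated pool and profile for nova-lxd containers while leaving Juju's default pool and profile untouched. This assumes LXD >= 2.9 (the storage pool API); the pool name, profile name, and block device are hypothetical examples, not names from the charm.

```shell
# Create a dedicated ZFS-backed pool for nova-lxd containers
# (/dev/sdb is an assumed example device).
lxc storage create nova-pool zfs source=/dev/sdb

# Create a profile whose root disk lives in that pool.
lxc profile create nova-compute
lxc profile device add nova-compute root disk path=/ pool=nova-pool

# Containers launched with this profile use nova-pool; anything
# attached only to the default profile keeps using Juju's pool.
lxc launch ubuntu:16.04 test-instance -p default -p nova-compute
```

Because the root disk device is defined per profile, the two consumers never rewrite each other's storage configuration, which is the conflict at the heart of this bug.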

Changed in lxd (Ubuntu):
status: New → Fix Released
Stéphane Graber (stgraber) wrote :

Marking the LXD task fix released since that API landed about 6 weeks ago.

John A Meinel (jameinel) wrote :

I thought with juju 2.1+ we no longer configure storage for LXD, which means we would no longer conflict.

I suppose as we introduce support for more directly controlling storage for lxd resources, we'll want to be using the new APIs and carving out our own pools.

Can we confirm if 2.1+ still experiences this failure?

Changed in juju:
status: Triaged → Incomplete
James Page (james-page) wrote :

Fixes have landed for both nova-lxd and the lxd charm to support storage pool configuration when the underlying LXD version supports it.

However, for a stock 16.04 install you won't get this feature by default; we might want to consider switching to the active backport of the latest LXD release to support this. That said, I'd not want to do that unless we have testing against that LXD version in the nova-lxd gate.
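For reference, obtaining the newer LXD on a stock 16.04 host looks roughly like the following; this is a sketch of the backports route mentioned in the comments above, assuming xenial-backports is enabled in the host's apt sources.

```shell
# The archive LXD on xenial (2.0.x) lacks the storage pool API;
# pull the backported LXD instead.
sudo apt update
sudo apt install -t xenial-backports lxd lxd-client

# If storage pools are supported (LXD >= 2.9), this lists the
# configured pools instead of failing.
lxc storage list
```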

Changed in nova-lxd:
status: Triaged → Fix Committed
Changed in charm-lxd:
milestone: none → 17.08
status: Triaged → Fix Committed

Reviewed: https://review.openstack.org/458763
Committed: https://git.openstack.org/cgit/openstack/charm-lxd/commit/?id=6df877339c55435a3e87774d7232f4cc0f51bd35
Submitter: Jenkins
Branch: master

commit 6df877339c55435a3e87774d7232f4cc0f51bd35
Author: Chris MacNaughton <email address hidden>
Date: Fri Apr 21 10:15:14 2017 +0200

    Migrate to LXD storage pools

    We need to continue with the previous style of managing storage
    until zesty or a specified -updates channel LXD

    This additionally reinforces our suggested deployment with ZFS
    for a deployment, rather than using LVM as the defualt in
    testing.

    Closes-Bug: 1676742
    Related-Bug: 1573681
    Depends-On: I5c38766c4be66d63ef4a07eccc780fcab5973d49
    Change-Id: I3ddbd11382c34ff9200e721fa3c90fe67bdce534

James Page (james-page) on 2017-09-05
Changed in nova-lxd:
status: Fix Committed → Fix Released
James Page (james-page) on 2017-09-12
Changed in charm-lxd:
status: Fix Committed → Fix Released