auto-compose LXD VM fails due to default root disk size "0" only when also specifying an extra storage disk

Bug #1983084 reported by Trent Lloyd
This bug affects 2 people
Affects: Canonical Juju
Status: Triaged
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

When using a MAAS LXD KVM host, charms with additional storage disks fail to deploy by default.

Example: juju deploy postgresql --storage pgdata=10G

This only fails when Juju has to automatically compose a new VM, and the failure is caused not by the additional disk itself but by the root-disk size being requested as "0".

If you override the root-disk size by running "juju set-model-constraints root-disk=8G", then it works as expected. It also works as expected when using an already-composed machine, or when deploying a charm without an additional storage disk.
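
For reference, the workaround is just to set any non-zero root-disk constraint before deploying; a minimal sequence based on the commands above (8G is only an example size) is:

  juju set-model-constraints root-disk=8G
  juju deploy postgresql --storage pgdata=10G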

For whatever reason, the logic does not seem to treat "0" as "auto-allocate the size" when an additional storage disk is requested and a new VM needs to be composed.

The error is:
0 pending pending focal failed to start machine 0 (failed to acquire node: No available machine matches constraints: [('agent_name', ['6ca452ff-aaef-45a6-8d5c-2889d908c609']), ('arch', ['amd64']), ('interfaces', ['1:space=1']), ('storage', ['root:0,0:10']), ('zone', ['default'])] (resolved to "arch=amd64/generic interfaces=1:space=1 storage=root:0,0:10 zone=default")), retrying in 10s (10 more attempts)
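
For anyone reproducing this: the message above is what "juju status" reports for the pending machine while the provisioner retries, and the same failure can be followed in the controller logs, e.g.:

  juju status
  juju debug-log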

MAAS 3.2.0-11989-g.84a255c14
LXD 5.0.0-b0287c1

Can be reproduced with a MAAS lab setup as follows:
https://github.com/canonical/maas-multipass/blob/main/maas.yml (https://maas.io/tutorials/build-a-maas-and-lxd-environment-in-30-minutes-with-multipass-on-ubuntu#5-launch-the-maas-and-lxd-multipass-environment)
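
A rough outline of the reproduction, assuming the linked tutorial is followed for the MAAS/LXD registration and Juju bootstrap steps (multipass resource sizes here are only illustrative):

  # Launch the all-in-one MAAS + LXD environment from the cloud-init above
  wget https://raw.githubusercontent.com/canonical/maas-multipass/main/maas.yml
  multipass launch -n maas -c 4 -m 8G -d 50G --cloud-init maas.yml

  # Per the tutorial: register the LXD server in MAAS as a KVM (LXD) host,
  # add the MAAS cloud and credentials to Juju and bootstrap a controller, then:
  juju deploy postgresql --storage pgdata=10G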

Changed in juju:
status: New → Triaged
importance: Undecided → Medium
milestone: none → 2.9.34
Changed in juju:
milestone: 2.9.34 → none
Revision history for this message
Canonical Juju QA Bot (juju-qa-bot) wrote:

This Medium-priority bug has not been updated in 60 days, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: Medium → Low
tags: added: expirebugs-bot
Revision history for this message
Trent Lloyd (lathiat) wrote (last edit):

This is being hit in support labs by a few people; I hit it again just now, and one of my colleagues recently hit it as well (and accidentally filed duplicate bug 2031440). Can we re-triage this?

Note: ordinary users cannot update the bug Importance, so the request from the juju-qa-bot cannot actually be followed; it should instead suggest changing the Status, I guess.

Changed in juju:
status: Triaged → Confirmed
John A Meinel (jameinel)
Changed in juju:
importance: Low → Medium
status: Confirmed → Triaged