Single storage pool cannot be used by multiple storage endpoints: "No available machine matches constraints"

Bug #1783084 reported by Dmitrii Shcherbakov
Affects          Status    Importance  Assigned to  Milestone
Canonical Juju   Expired   Undecided   Unassigned
Ceph OSD Charm   Invalid   Medium      Unassigned
charms.ceph      Invalid   Medium      Unassigned

Bug Description

juju create-storage-pool nvme-osd-devices maas tags=ssd
juju create-storage-pool nvme-bluestore-wal maas tags=optane
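
The pools can be confirmed with juju storage-pools; the output below is only a sketch of its shape, not captured from the environment:

    juju storage-pools
    # Name                Provider  Attrs
    # nvme-bluestore-wal  maas      tags=optane
    # nvme-osd-devices    maas      tags=ssd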

In a bundle, using the pools like this works:

    storage:
      osd-devices: 'nvme-osd-devices'
      bluestore-wal: 'nvme-bluestore-wal'
      #bluestore-db: 'nvme-bluestore-wal'

This does not:

    storage:
      osd-devices: 'nvme-osd-devices'
      bluestore-wal: 'nvme-bluestore-wal'
      bluestore-db: 'nvme-bluestore-wal'
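
For context, a rough sketch of the surrounding application stanza (the charm name and unit count are illustrative, not taken from the report):

    applications:
      ceph-osd:
        charm: cs:ceph-osd
        num_units: 3
        storage:
          osd-devices: 'nvme-osd-devices'
          bluestore-wal: 'nvme-bluestore-wal'
          bluestore-db: 'nvme-bluestore-wal'   # same pool for both bluestore endpoints triggers the failure below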

19 down pending xenial No available machine matches constraints: [('interfaces', ['ceph-access:space=6;cloud-compute:space=6;ceph:space=6;mon:space=7;compute-peer:space=6;secrets-storage:space=6;cluster:space=4;ephemeral-backend:space=6;nrpe-external-master:space=6;public:space=7;amqp:space=6;image-service:space=6;0:space=6;lxd:space=6;neutron-plugin:space=6;nova-ceilometer:space=6;shared-db:space=6;internal:space=6']), ('agent_name', ['ab00d651-2f2b-423f-88aa-7d721dbb5ee7']), ('zone', ['default']), ('tags', ['nvme']), ('storage', ['root:0,6:1(optane),7:1(optane),8:1(ssd)'])] (resolved to "interfaces=ceph-access:space=6;cloud-compute:space=6;ceph:space=6;mon:space=7;compute-peer:space=6;secrets-storage:space=6;cluster:space=4;ephemeral-backend:space=6;nrpe-external-master:space=6;public:space=7;amqp:space=6;image-service:space=6;0:space=6;lxd:space=6;neutron-plugin:space=6;nova-ceilometer:space=6;shared-db:space=6;internal:space=6 storage=root:0,6:1(optane),7:1(optane),8:1(ssd) tags=nvme zone=default")

All spaces are present and configured on the respective machines (commenting out the last storage binding showed that spaces are not the problem here):
https://pastebin.canonical.com/p/2t9dbbFG6C/

Tags: cpe-onsite
Dmitrii Shcherbakov (dmitriis) wrote:

This behavior affects charm-ceph-osd when the WAL and DB need to be collocated.

To work around this issue, the DB needs to reuse the device resolved for the WAL storage endpoint (i.e. the charm collocates the DB with the WAL when bluestore-db is left unspecified):

https://review.openstack.org/#/q/topic:fe-collocate-if-db-unspecified+(status:open+OR+status:merged)
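
A sketch of what the workaround looks like in a bundle (only the storage keys come from the report; the comment describes the charm behaviour assumed from the change above):

    storage:
      osd-devices: 'nvme-osd-devices'
      bluestore-wal: 'nvme-bluestore-wal'
      # bluestore-db deliberately left unspecified; with the change
      # above, the charm places the DB on the WAL device instead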

James Page (james-page)
Changed in charms.ceph:
status: New → Triaged
Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Medium
Changed in charms.ceph:
importance: Undecided → Medium
Ian Booth (wallyworld) wrote:

This looks like a MAAS issue; the error message comes from MAAS. Juju is simply asking for a machine with two storage devices, each tagged "optane"; in this case the default size for each pool would be 1GiB, as I don't think the charm specifies a default storage size. The storage constraint passed to MAAS is:

root:0,6:1(optane),7:1(optane),8:1(ssd)

This corresponds to asking for one disk from the osd-devices pool (tagged ssd), two disks from the bluestore pools (tagged optane), and a default-sized root disk.

You'd need to look at the MAAS logs to see why the request could not be satisfied.
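
One way to check is to replay the same allocation against MAAS directly. This is only a sketch: it assumes a logged-in admin CLI profile named $PROFILE and that this MAAS release accepts the dry_run parameter on the allocate operation:

    maas $PROFILE machines allocate \
        zone=default \
        tags=nvme \
        storage="root:0,6:1(optane),7:1(optane),8:1(ssd)" \
        dry_run=true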

Changed in juju:
status: New → Incomplete
James Page (james-page) wrote:

This is quite an old bug, and MAAS + Juju storage is still not a commonly used combination (devices are mostly configured directly rather than through Juju storage support).

Is this still an issue?

Changed in charms.ceph:
status: Triaged → Opinion
status: Opinion → Invalid
Changed in charm-ceph-osd:
status: Triaged → Invalid
James Page (james-page) wrote:

I've marked the charm tasks as invalid as I don't think this is actually a charm issue.

Launchpad Janitor (janitor) wrote:

[Expired for juju because there has been no activity for 60 days.]

Changed in juju:
status: Incomplete → Expired