Single storage pool cannot be used by multiple storage endpoints: "no available machine matches constraints"

Bug #1783084 reported by Dmitrii Shcherbakov on 2018-07-23

Affects                    Status       Importance   Assigned to   Milestone
OpenStack ceph-osd charm   Triaged      Medium       Unassigned
charms.ceph                Triaged      Medium       Unassigned
juju                       Incomplete   Undecided    Unassigned

Bug Description

juju create-storage-pool nvme-osd-devices maas tags=ssd
juju create-storage-pool nvme-bluestore-wal maas tags=optane
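
For reference, the resulting pools can be listed to confirm the provider and tag attributes (standard Juju command; the output layout shown is approximate):

juju storage-pools
# Name                 Provider  Attrs
# nvme-bluestore-wal   maas      tags=optane
# nvme-osd-devices     maas      tags=ssd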

In a bundle, this works:

    storage:
      osd-devices: 'nvme-osd-devices'
      bluestore-wal: 'nvme-bluestore-wal'
      #bluestore-db: 'nvme-bluestore-wal'

This does not:
    storage:
      osd-devices: 'nvme-osd-devices'
      bluestore-wal: 'nvme-bluestore-wal'
      bluestore-db: 'nvme-bluestore-wal'
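
For context, a sketch of where this stanza sits in a bundle; the application name, charm reference and unit count below are illustrative, only the storage keys and pool names come from this report:

    applications:
      ceph-osd:
        charm: cs:ceph-osd    # illustrative charm reference
        num_units: 3          # illustrative
        storage:
          osd-devices: 'nvme-osd-devices'
          bluestore-wal: 'nvme-bluestore-wal'
          bluestore-db: 'nvme-bluestore-wal'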

19 down pending xenial No available machine matches constraints: [('interfaces', ['ceph-access:space=6;cloud-compute:space=6;ceph:space=6;mon:space=7;compute-peer:space=6;secrets-storage:space=6;cluster:space=4;ephemeral-backend:space=6;nrpe-external-master:space=6;public:space=7;amqp:space=6;image-service:space=6;0:space=6;lxd:space=6;neutron-plugin:space=6;nova-ceilometer:space=6;shared-db:space=6;internal:space=6']), ('agent_name', ['ab00d651-2f2b-423f-88aa-7d721dbb5ee7']), ('zone', ['default']), ('tags', ['nvme']), ('storage', ['root:0,6:1(optane),7:1(optane),8:1(ssd)'])] (resolved to "interfaces=ceph-access:space=6;cloud-compute:space=6;ceph:space=6;mon:space=7;compute-peer:space=6;secrets-storage:space=6;cluster:space=4;ephemeral-backend:space=6;nrpe-external-master:space=6;public:space=7;amqp:space=6;image-service:space=6;0:space=6;lxd:space=6;neutron-plugin:space=6;nova-ceilometer:space=6;shared-db:space=6;internal:space=6 storage=root:0,6:1(optane),7:1(optane),8:1(ssd) tags=nvme zone=default")

All spaces are present and configured on the respective machines (commenting out the last storage binding confirmed that spaces are not the problem here):
https://pastebin.canonical.com/p/2t9dbbFG6C/

Dmitrii Shcherbakov (dmitriis) wrote:

This behavior affects charm-ceph-osd when the wal and db need to be collocated on the same device.

To work around this issue, the db needs to reuse the device resolved for the wal storage endpoint:

https://review.openstack.org/#/q/topic:fe-collocate-if-db-unspecified+(status:open+OR+status:merged)

James Page (james-page) on 2018-07-23
Changed in charms.ceph:
status: New → Triaged
Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Medium
Changed in charms.ceph:
importance: Undecided → Medium
Ian Booth (wallyworld) wrote:

This looks like a maas issue - the error message comes from maas. Juju is simply asking for a machine with two storage devices, each tagged "optane"; in this case the default size for each pool would be 1GiB, as I don't think the charm specifies a default storage size. The storage constraint passed to maas is:

root:0,6:1(optane),7:1(optane),8:1(ssd)

This corresponds to asking for one disk from the osd-devices pool (tagged ssd), two disks from the nvme-bluestore-wal pool (tagged optane, one for bluestore-wal and one for bluestore-db), and a default-sized root disk.

You'd need to look at the maas logs to see why the request could not be satisfied.
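
For comparison, a sketch of the same request expressed directly on the command line (pool names from this report; counts explicit, sizes left to the charm defaults) - it should translate into the same storage constraint shown above:

    juju deploy ceph-osd \
      --storage osd-devices=nvme-osd-devices,1 \
      --storage bluestore-wal=nvme-bluestore-wal,1 \
      --storage bluestore-db=nvme-bluestore-wal,1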

Changed in juju:
status: New → Incomplete