KVM host with Ceph based storage pool
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Invalid | Undecided | Unassigned |
Bug Description
Hi,
I wanted to try composing VMs with a Ceph (RBD) based storage pool.
I configured my KVM host manually to add the RBD libvirt storage pool.
It works perfectly, and I can create a VM with virt-manager or virt-install with no issues, but in the MAAS UI, when I want to add the KVM host, I get this error:
==> Error: Performing refresh failed: Failed talking to pod: list index out of range
As soon as I disable the RBD storage pool, it works.
I get these errors in the MAAS logs:
2020-09-10 15:43:55 provisioningser
Traceback (most recent call last):
File "/snap/
self.
File "/snap/
self.
File "/snap/
current.result = callback(
File "/snap/
_inlineCal
--- <exception caught here> ---
File "/snap/
result = result.
File "/snap/
return g.throw(self.type, self.value, self.tb)
File "/snap/
discovered_pod = yield deferToThread(
File "/snap/
result = inContext.theWork()
File "/snap/
inContext.
File "/snap/
return self.currentCon
File "/snap/
return func(*args,**kw)
File "/snap/
result = func(*args, **kwargs)
File "/snap/
discovered
File "/snap/
pool_path = evaluator(
builtins.
What I understand is that the pool_path value is missing, but this is normal for this kind of storage pool: there is no target/path defined in the XML configuration, as you can see:
<pool type="rbd">
<name>ceph</name>
<uuid>
<capacity unit="bytes"
<allocation unit="bytes"
<available unit="bytes"
<source>
<host name="192.
<name>
<auth type="ceph" username="libvirt">
<secret uuid="0d64e8cf-
</auth>
</source>
</pool>
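The failure mode described above can be sketched in Python. This is not the actual MAAS code (which appears to use an XPath evaluator over the libvirt pool XML); it is a minimal reproduction using the standard library's ElementTree, with a trimmed, hypothetical RBD pool definition (the host IP and pool name below are placeholders, not the values from this report). Looking up `target/path` on such a pool yields an empty list, so an unguarded `[0]` index raises the "list index out of range" error seen in the traceback:

```python
import xml.etree.ElementTree as ET

# Hypothetical RBD pool XML, trimmed to the relevant structure: note the
# complete absence of a <target>/<path> element, which is normal for rbd pools.
RBD_POOL_XML = """
<pool type="rbd">
  <name>ceph</name>
  <source>
    <host name="192.0.2.10"/>
    <name>libvirt-pool</name>
  </source>
</pool>
"""

def pool_path(xml_text):
    """Return the pool's target path, or None when the pool type
    (e.g. rbd) defines no <target>/<path> element."""
    root = ET.fromstring(xml_text)
    elems = root.findall("./target/path")
    if not elems:
        # An unguarded elems[0] here would raise IndexError,
        # i.e. "list index out of range".
        return None
    return elems[0].text

print(pool_path(RBD_POOL_XML))  # None for an rbd pool
```

A fix along these lines (treating a missing target/path as optional rather than indexing it blindly) would let MAAS refresh a host that has an RBD pool defined.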
MAAS version:
root@lab-
Name Version Rev Tracking Publisher Notes
maas 2.8.2-8577-
The same happens on MAAS 2.9.0-9137-g.8e920a12b.