kvm pod refresh fails with gluster storage pool
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Expired | Medium | Unassigned |
Bug Description
I added a glusterfs pool using virsh on a KVM server which is being used as a pod. When I went to refresh the pod in MAAS, it failed with a "list index out of range" error. It looks like the virsh pod driver currently only works with directory pool types, because other types like gluster don't have the same "path" tag in the XML.
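For illustration, here is a minimal reproduction of the mismatch using libvirt-style storage-pool XML. The pool definitions and names below are made up for this example, and lxml stands in for whatever XPath evaluation the driver actually uses:

```python
from lxml import etree

# Illustrative pool definitions in libvirt's storage-pool XML format;
# the names and paths are assumptions, not taken from the affected host.
DIR_POOL_XML = """<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>"""

GLUSTER_POOL_XML = """<pool type='gluster'>
  <name>gv0</name>
  <source>
    <host name='gluster.example.com'/>
    <dir path='/'/>
    <name>gv0</name>
  </source>
</pool>"""

XPATH_POOL_PATH = "/pool/target/path"  # what the driver evaluates today

for xml in (DIR_POOL_XML, GLUSTER_POOL_XML):
    pool = etree.fromstring(xml)
    matches = pool.xpath(XPATH_POOL_PATH)
    try:
        # Unguarded [0] is the failing pattern: fine for 'dir' pools...
        print(pool.get("type"), "->", matches[0].text)
    except IndexError:
        # ...but a gluster pool has no /pool/target/path at all.
        print(pool.get("type"), "-> list index out of range")
```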
I was able to manually fix the issue by making the following modification in /usr/lib/
```diff
75a76
> XPATH_POOL_DIR = "/pool/source/dir"
567d567
< pool_path = evaluator(
568a569,574
> if pool_type == 'dir':
>     pool_path = evaluator(
> elif pool_type == 'gluster':
>     pool_path = evaluator(
> else:
>     pool_path = ''
```
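Reconstructed in full, the guarded lookup might look roughly like the sketch below. The constant names mirror the diff above; the evaluator setup, the `@path` attribute lookup for gluster (the diff's `XPATH_POOL_DIR` points at the `<dir>` element, whose path lives in its `path` attribute), and the `get_pool_path` wrapper are assumptions for illustration, not MAAS's actual code:

```python
from lxml import etree

XPATH_POOL_TYPE = "/pool/@type"
XPATH_POOL_PATH = "/pool/target/path"       # pre-existing, works for 'dir'
XPATH_POOL_DIR = "/pool/source/dir/@path"   # assumed gluster equivalent

def get_pool_path(pool_xml: str) -> str:
    """Return the pool's backing path, or '' if the type has none.

    Hypothetical helper, not MAAS's actual API; it only illustrates
    the branch-on-pool-type guard from the diff above.
    """
    evaluator = etree.XPathEvaluator(etree.fromstring(pool_xml))
    pool_type = evaluator(XPATH_POOL_TYPE)[0]
    if pool_type == "dir":
        return evaluator(XPATH_POOL_PATH)[0].text
    elif pool_type == "gluster":
        return str(evaluator(XPATH_POOL_DIR)[0])
    # Other pool types (rbd, logical, ...) carry no directory path here.
    return ""
```

Falling back to an empty string keeps the refresh from crashing on any pool type that lacks a directory path, rather than special-casing gluster alone.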
```
dpkg -l '*maas*' | cat
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                 Version      Architecture Description
+++-====================-============-============-===========================
ii  maas                 2.6.0-7802-
ii  maas-cli             2.6.0-7802-
un  maas-cluster-
ii  maas-common          2.6.0-7802-
ii  maas-dhcp            2.6.0-7802-
un  maas-dns             <none>       <none>       (no description available)
ii  maas-proxy           2.6.0-7802-
ii  maas-rack-
ii  maas-region-api      2.6.0-7802-
ii  maas-region-
un  maas-region-
un  python-django-maas   <none>       <none>       (no description available)
un  python-maas-client   <none>       <none>       (no description available)
un  python-
ii  python3-django-maas  2.6.0-7802-
ii  python3-maas-client  2.6.0-7802-
ii  python3-
```
summary:
- pod refresh fails with gluster storage pool
+ kvm pod refresh fails with gluster storage pool
description: updated
Changed in maas:
status: New → Triaged
importance: Undecided → Medium
Is this still an issue in a recent MAAS (3.3+)?