cannot attach volume to instance

Bug #1855922 reported by BN
This bug affects 1 person

Affects: kolla-ansible
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: none

Bug Description

**Bug Report**

What happened: External Ceph is used; volumes can be created, and instances can be created and started. However, once I create an instance with a volume, there is an error. Furthermore, if I try to attach a volume to the instance, the volume alternates between Reserved and Available status. When I shut off the instance, attach the volume, and try to start the instance again, the instance can't be started.

**Environment**:
* OS (e.g. from /etc/os-release): Ubuntu
* Kernel (e.g. `uname -a`): Linux shpak 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* Docker version if applicable (e.g. `docker version`): Docker version 19.03.5, build 633a0ea838
* Docker image Install type (source/binary): source
* Docker image distribution: Ubuntu
* Are you using official images from Docker Hub or self built? Docker hub
* Share your inventory file, globals.yml and other configuration files if relevant

kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "stein"
kolla_internal_vip_address: "10.0.225.254"
kolla_external_vip_address: "xxxxxxxxxxxxxxxx"
keepalived_virtual_router_id: "101"
#enable_murano: "yes"
network_interface: "enp2s0f0"
neutron_external_interface: "enp2s0f1"
enable_neutron_provider_networks: "yes"
neutron_tenant_network_types: "vxlan,vlan,flat"
#enable_barbican: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_fluentd: "yes"
enable_masakari: "yes"
#enable_zun: "yes"
#enable_kuryr: "yes"
#enable_etcd: "yes"
#docker_configure_for_zun: "yes"
enable_magnum: "yes"
#enable_heat: "no"
enable_ceph: "no"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
glance_enable_rolling_upgrade: "no"
barbican_crypto_plugin: "simple_crypto"
barbican_library_path: "/usr/lib/libCryptoki2_64.so"
ironic_dnsmasq_dhcp_range:
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:
horizon_port: 48000
------------------------------------------------------------------------

cinder service-list
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary | Host | Zone | Status | State | Updated_at | Cluster | Disabled Reason | Backend State |
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-backup | shpak | nova | enabled | up | 2019-12-10T19:36:09.000000 | - | - | |
| cinder-backup | soroka | nova | enabled | up | 2019-12-10T19:36:09.000000 | - | - | |
| cinder-backup | sova | nova | enabled | up | 2019-12-10T19:36:15.000000 | - | - | |
| cinder-scheduler | shpak | nova | enabled | up | 2019-12-10T19:36:10.000000 | - | - | |
| cinder-volume | rbd:volumes@rbd-1 | nova | enabled | up | 2019-12-10T19:36:11.000000 | - | - | up |
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+

nova service-list
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+
| a150b22d-d143-4515-9d3f-b3892dd7cbeb | nova-conductor | shpak | internal | enabled | up | 2019-12-10T19:36:35.000000 | - | False |
| c0747a6b-dc39-48c7-a7e5-a5c3197da4da | nova-scheduler | shpak | internal | enabled | up | 2019-12-10T19:36:32.000000 | - | False |
| 1617c827-ff6e-4e26-8550-3d2031637dcc | nova-consoleauth | shpak | internal | enabled | up | 2019-12-10T19:36:26.000000 | - | False |
| a15bb62e-2859-4690-bdff-25a215975e8a | nova-compute | shpak | nova | enabled | up | 2019-12-10T19:36:34.000000 | - | False |
| 0f543a56-699f-419b-89c2-92b5d87e4c0b | nova-compute | soroka | nova | enabled | up | 2019-12-10T19:36:26.000000 | - | False |
| f63bc35d-222a-4d67-8a2f-1701ffd30344 | nova-compute | sova | nova | enabled | up | 2019-12-10T19:36:27.000000 | - | False |
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+

Errors:

nova-conductor log:

2019-12-10 21:25:48.383 22 ERROR nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Error from last host: soroka (node soroka): [u'Traceback (most recent call last):\n', u' File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 1984, in _do_build_and_run_instance\n filter_properties, request_spec)\n', u' File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2354, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b was re-scheduled: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:44.603508 7fe04e56c500 -1 Errors while parsing config file!\n2019-12-10 21:25:44.603672 7fe04e56c500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied\n2019-12-10T19:25:44.621938Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\\;none:mon_host=10.0.1.1\\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted\n']
2019-12-10 21:25:48.385 22 WARNING nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.
2019-12-10 21:25:48.386 22 WARNING nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.

nova-compute.log

2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Instance failed to spawn: libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:35.536875 7f3934f3b500 -1 Errors while parsing config file!
2019-12-10 21:25:35.536879 7f3934f3b500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied
2019-12-10T19:25:35.550071Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\;none:mon_host=10.0.1.1\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Traceback (most recent call last):
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2495, in _build_resources
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] yield resources
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2256, in _build_and_run_instance
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] block_device_info=block_device_info)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3204, in spawn
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] destroy_disks_on_failure=True)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5727, in _create_domain_and_network
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] destroy_disks_on_failure)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self.force_reraise()
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(self.type_, self.value, self.tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5696, in _create_domain_and_network
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] post_xml_callback=post_xml_callback)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5630, in _create_domain
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] guest.launch(pause=pause)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self._encoded_xml, errors='ignore')
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self.force_reraise()
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(self.type_, self.value, self.tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] return self._domain.createWithFlags(flags)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 190, in doit
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 148, in proxy_call
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] rv = execute(f, *args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 129, in execute
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(c, e, tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] rv = meth(*args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/usr/lib/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:35.536875 7f3934f3b500 -1 Errors while parsing config file!
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] 2019-12-10 21:25:35.536879 7f3934f3b500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] 2019-12-10T19:25:35.550071Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\;none:mon_host=10.0.1.1\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b]
2019-12-10 21:25:36.105 6 INFO nova.compute.manager [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Terminating instance

cinder-volume.conf:

[DEFAULT]
enabled_backends=rbd-1

[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=0ac2495e-e40d-41fc-bfd3-4f42cbadf19a

grep rbd_secret_uuid /etc/kolla/passwords.yml
cinder_rbd_secret_uuid: 2c800a7f-622a-4ff2-b661-d36d8edc2001
rbd_secret_uuid: 0ac2495e-e40d-41fc-bfd3-4f42cbadf19a

Revision history for this message
BN (zatoichy) wrote :

Ceph keyrings were created like that:

sudo ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'allow *'
sudo ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'allow *'
sudo ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'allow *'
sudo ceph auth get-or-create client.nova mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images' mgr 'allow *'
sudo ceph auth get client.glance -o /etc/kolla/config/glance/ceph.client.glance.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/nova/ceph.client.cinder.keyring
sudo ceph auth get client.cinder-backup -o /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
sudo ceph auth get client.nova -o /etc/kolla/config/nova/ceph.client.nova.keyring

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

Did you provide ceph.conf for nova? (e.g. in /etc/kolla/config/nova/ceph.conf)

It looks like libvirt cannot talk to Ceph because it has no idea where to look for its configuration.

Ref: https://docs.openstack.org/kolla-ansible/train/reference/storage/external-ceph-guide.html#nova

Changed in kolla-ansible:
status: New → Incomplete
Revision history for this message
BN (zatoichy) wrote :

Hi,

Thank you very much for your response.

root@shpak:/etc/kolla/config/nova# ls
ceph.client.cinder.keyring nova-compute.conf
ceph.client.nova.keyring nova.conf
ceph.conf
root@shpak:/etc/kolla/config/nova# cat ceph.conf
[global]
fsid = 02e2f87b-7687-4616-bff0-d2ee241bf6bf
mon_initial_members = shpak
mon_host = 10.0.1.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
root@shpak:/etc/kolla/config/nova# cd ..
root@shpak:/etc/kolla/config# cd glance
root@shpak:/etc/kolla/config/glance# cat ceph.conf
[global]
fsid = 02e2f87b-7687-4616-bff0-d2ee241bf6bf
mon_initial_members = shpak
mon_host = 10.0.1.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
root@shpak:/etc/kolla/config/glance#

The ceph.conf for nova is the same as for the other services. I also have keyrings for both cinder and nova in that folder, and the nova client is created in Ceph as well.

Mark Goddard (mgoddard)
Changed in kolla-ansible:
status: Incomplete → Triaged
importance: Undecided → Medium
Changed in kolla-ansible:
assignee: nobody → Mark Goddard (mgoddard)
status: Triaged → In Progress
Revision history for this message
Mark Goddard (mgoddard) wrote :
Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

Does the fix above help you? @BN

Revision history for this message
BN (zatoichy) wrote :

Hi,

Is there any documentation on how to apply those changes via a kolla-ansible upgrade? Or should I update that role in Ansible manually and run the reconfigure command?

Thank you

Revision history for this message
BN (zatoichy) wrote :

I haven't tried it yet. I will be able to do that in the evening.

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

The fastest fix is to get inside each nova_libvirt container and run:
  chown nova /etc/ceph/ceph.conf

Otherwise, change this line in your kolla-ansible installation and run either deploy or reconfigure.
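
A minimal sketch of that one-off workaround, assuming a Docker-based deployment and the default kolla-ansible container name nova_libvirt (adjust if yours differs):

  # run on each compute host
  docker exec -u root nova_libvirt chown nova /etc/ceph/ceph.conf
  # verify the ownership inside the container
  docker exec nova_libvirt ls -l /etc/ceph/ceph.conf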

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

I do wonder, however, why we don't get this error with non-KVM QEMU on Ubuntu, nor with either QEMU on CentOS. I have not reproduced it myself.

Revision history for this message
BN (zatoichy) wrote :

Okay, the issue was not related to user permissions; I tried changing the owner from root to nova, but it did not solve my issue. Then I checked the external Ceph guide again and noticed that I had been following some other guide, and was therefore using rbd_secret_uuid instead of cinder_rbd_secret_uuid in the cinder-volume.conf file. Once I changed it to cinder_rbd_secret_uuid, the issue was solved: instances can be started with their volumes, and volumes can be attached to instances as well.
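
For anyone hitting the same thing, a sketch of the corrected [rbd-1] override, assuming the Jinja expression is rendered from passwords.yml when kolla-ansible merges the config:

  [rbd-1]
  rbd_ceph_conf=/etc/ceph/ceph.conf
  rbd_user=cinder
  backend_host=rbd:volumes
  rbd_pool=volumes
  volume_backend_name=rbd-1
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  # must match the libvirt secret that nova uses, i.e. cinder_rbd_secret_uuid from passwords.yml
  rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}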

Thank you

P.S. Just a quick question: if you run rbd init vms (from the external Ceph setup) while you already have some instances running in OpenStack, is there any possibility that they could be automatically deleted?

Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

If they are running off this pool, then it must already have been initialized.
If they don't use Ceph already, then they are fine too.
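
If in doubt, a quick sanity check against the external cluster (assuming the pool is named vms); pool initialization only enables the rbd application on the pool and should not touch existing images:

  rbd ls -p vms                        # list existing images in the pool
  ceph osd pool application get vms    # check whether the 'rbd' application is already enabled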

no longer affects: kolla-ansible/rocky
no longer affects: kolla-ansible/stein
no longer affects: kolla-ansible/train
no longer affects: kolla-ansible/ussuri
Changed in kolla-ansible:
importance: Medium → Undecided
assignee: Mark Goddard (mgoddard) → nobody
milestone: 10.0.0 → none
Changed in kolla-ansible:
assignee: nobody → yao ning (mslovy11022)
status: Invalid → In Progress
Revision history for this message
Radosław Piliszek (yoctozepto) wrote :

(support request)

Changed in kolla-ansible:
status: In Progress → Invalid
assignee: yao ning (mslovy11022) → nobody