**Bug Report**
What happened: An external Ceph cluster is used. Volumes can be created, and instances without volumes can be created and started. However, as soon as I create an instance with a volume, it fails with an error. Likewise, when I try to attach a volume to a running instance, the volume just cycles between Reserved and Available status. If I shut the instance off, attach the volume, and then try to start the instance again, the instance cannot be started. A minimal reproduction sketch is shown below.
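For reference, a reproduction sketch with the openstack CLI; the image, flavor, and network names are illustrative placeholders, not taken from this deployment:

```
# Boot from volume: the instance ends up in ERROR state.
openstack volume create --image cirros --size 5 test-vol
openstack server create --flavor m1.small --network demo-net --volume test-vol bfv-test

# Attach to a running instance: the volume flips between Reserved and Available.
openstack server create --flavor m1.small --network demo-net --image cirros plain-test
openstack server add volume plain-test test-vol
```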
**Environment**:
* OS (e.g. from /etc/os-release): Ubuntu
* Kernel (e.g. `uname -a`): Linux shpak 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* Docker version if applicable (e.g. `docker version`): Docker version 19.03.5, build 633a0ea838
* Docker image Install type (source/binary): source
* Docker image distribution: Ubuntu
* Are you using official images from Docker Hub or self built? Docker hub
* Share your inventory file, globals.yml and other configuration files if relevant

globals.yml:
```
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "stein"
kolla_internal_vip_address: "10.0.225.254"
kolla_external_vip_address: "xxxxxxxxxxxxxxxx"
keepalived_virtual_router_id: "101"
#enable_murano: "yes"
network_interface: "enp2s0f0"
neutron_external_interface: "enp2s0f1"
enable_neutron_provider_networks: "yes"
neutron_tenant_network_types: "vxlan,vlan,flat"
#enable_barbican: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_fluentd: "yes"
enable_masakari: "yes"
#enable_zun: "yes"
#enable_kuryr: "yes"
#enable_etcd: "yes"
#docker_configure_for_zun: "yes"
enable_magnum: "yes"
#enable_heat: "no"
enable_ceph: "no"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
glance_enable_rolling_upgrade: "no"
barbican_crypto_plugin: "simple_crypto"
barbican_library_path: "/usr/lib/libCryptoki2_64.so"
ironic_dnsmasq_dhcp_range:
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:
horizon_port: 48000
```
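Note that `enable_ceph` is "no" while the Glance/Cinder/Nova Ceph backends are enabled, i.e. the external Ceph integration is in use. In that mode kolla-ansible expects the cluster's ceph.conf and keyrings under the node custom config directory. A quick sanity check of the expected layout, assuming the default `node_custom_config` of /etc/kolla/config:

```
ls -l /etc/kolla/config/glance/ceph.conf /etc/kolla/config/glance/ceph.client.glance.keyring
ls -l /etc/kolla/config/cinder/ceph.conf /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
ls -l /etc/kolla/config/nova/ceph.conf /etc/kolla/config/nova/ceph.client.cinder.keyring /etc/kolla/config/nova/ceph.client.nova.keyring
```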
`cinder service-list`:
```
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary | Host | Zone | Status | State | Updated_at | Cluster | Disabled Reason | Backend State |
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-backup | shpak | nova | enabled | up | 2019-12-10T19:36:09.000000 | - | - | |
| cinder-backup | soroka | nova | enabled | up | 2019-12-10T19:36:09.000000 | - | - | |
| cinder-backup | sova | nova | enabled | up | 2019-12-10T19:36:15.000000 | - | - | |
| cinder-scheduler | shpak | nova | enabled | up | 2019-12-10T19:36:10.000000 | - | - | |
| cinder-volume | rbd:volumes@rbd-1 | nova | enabled | up | 2019-12-10T19:36:11.000000 | - | - | up |
+------------------+-------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
```
`nova service-list`:
```
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+
| a150b22d-d143-4515-9d3f-b3892dd7cbeb | nova-conductor | shpak | internal | enabled | up | 2019-12-10T19:36:35.000000 | - | False |
| c0747a6b-dc39-48c7-a7e5-a5c3197da4da | nova-scheduler | shpak | internal | enabled | up | 2019-12-10T19:36:32.000000 | - | False |
| 1617c827-ff6e-4e26-8550-3d2031637dcc | nova-consoleauth | shpak | internal | enabled | up | 2019-12-10T19:36:26.000000 | - | False |
| a15bb62e-2859-4690-bdff-25a215975e8a | nova-compute | shpak | nova | enabled | up | 2019-12-10T19:36:34.000000 | - | False |
| 0f543a56-699f-419b-89c2-92b5d87e4c0b | nova-compute | soroka | nova | enabled | up | 2019-12-10T19:36:26.000000 | - | False |
| f63bc35d-222a-4d67-8a2f-1701ffd30344 | nova-compute | sova | nova | enabled | up | 2019-12-10T19:36:27.000000 | - | False |
+--------------------------------------+------------------+--------+----------+---------+-------+----------------------------+-----------------+-------------+
```
**Errors**:

nova-conductor log:
```
2019-12-10 21:25:48.383 22 ERROR nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Error from last host: soroka (node soroka): [u'Traceback (most recent call last):\n', u' File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 1984, in _do_build_and_run_instance\n filter_properties, request_spec)\n', u' File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2354, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b was re-scheduled: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:44.603508 7fe04e56c500 -1 Errors while parsing config file!\n2019-12-10 21:25:44.603672 7fe04e56c500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied\n2019-12-10T19:25:44.621938Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\\;none:mon_host=10.0.1.1\\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted\n']
2019-12-10 21:25:48.385 22 WARNING nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.
2019-12-10 21:25:48.386 22 WARNING nova.scheduler.utils [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e15e4118-fb29-4bf6-b52a-dcff91cecf3b.
```
nova-compute.log:
```
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Instance failed to spawn: libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:35.536875 7f3934f3b500 -1 Errors while parsing config file!
2019-12-10 21:25:35.536879 7f3934f3b500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied
2019-12-10T19:25:35.550071Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\;none:mon_host=10.0.1.1\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Traceback (most recent call last):
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2495, in _build_resources
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] yield resources
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2256, in _build_and_run_instance
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] block_device_info=block_device_info)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3204, in spawn
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] destroy_disks_on_failure=True)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5727, in _create_domain_and_network
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] destroy_disks_on_failure)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self.force_reraise()
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(self.type_, self.value, self.tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5696, in _create_domain_and_network
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] post_xml_callback=post_xml_callback)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5630, in _create_domain
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] guest.launch(pause=pause)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self._encoded_xml, errors='ignore')
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] self.force_reraise()
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(self.type_, self.value, self.tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] return self._domain.createWithFlags(flags)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 190, in doit
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 148, in proxy_call
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] rv = execute(f, *args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 129, in execute
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] six.reraise(c, e, tb)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] rv = meth(*args, **kwargs)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] File "/usr/lib/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-12-10 21:25:35.536875 7f3934f3b500 -1 Errors while parsing config file!
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] 2019-12-10 21:25:35.536879 7f3934f3b500 -1 parse_file: cannot open /etc/ceph/ceph.conf: (13) Permission denied
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] 2019-12-10T19:25:35.550071Z qemu-system-x86_64: -drive file=rbd:volumes/volume-83b099d2-90a4-4a41-aa54-ffe4cd03bb85:id=cinder:auth_supported=cephx\;none:mon_host=10.0.1.1\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap: error connecting: Operation not permitted
2019-12-10 21:25:36.104 6 ERROR nova.compute.manager [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b]
2019-12-10 21:25:36.105 6 INFO nova.compute.manager [req-1ab1c5cc-4033-406e-b8d7-374d1b892e1d e850ce6d2aa945cd8f2220972a66c2a1 e87e623e917b422c9c5160b119d3a528 - default default] [instance: e15e4118-fb29-4bf6-b52a-dcff91cecf3b] Terminating instance
```
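The underlying failure is qemu, running unprivileged inside the libvirt container, being denied read access to /etc/ceph/ceph.conf (`(13) Permission denied`). A hedged diagnostic sketch; the container names (`nova_libvirt`, `nova_compute`) assume a standard kolla deployment:

```
# Check the mode and ownership of ceph.conf as the containers see it.
docker exec nova_libvirt stat -c '%a %U:%G %n' /etc/ceph/ceph.conf
docker exec nova_compute stat -c '%a %U:%G %n' /etc/ceph/ceph.conf

# If it is e.g. 600 root:root, the unprivileged qemu process cannot read it.
# Making it world-readable inside the container should unblock the boot:
docker exec -u root nova_libvirt chmod 644 /etc/ceph/ceph.conf
```

If that turns out to be the cause, the source file under /etc/kolla/config (e.g. /etc/kolla/config/nova/ceph.conf) likely carries the restrictive mode too and should be fixed there as well, so a reconfigure does not reintroduce it.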
cinder-volume.conf:
```
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=0ac2495e-e40d-41fc-bfd3-4f42cbadf19a
```
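For the attach path to work, nova-compute must present the same Ceph user and libvirt secret that cinder references above. A sketch of the matching override, assuming overrides live in /etc/kolla/config/nova/nova-compute.conf; the UUID must equal cinder's rbd_secret_uuid:

```
[libvirt]
rbd_user = cinder
rbd_secret_uuid = 0ac2495e-e40d-41fc-bfd3-4f42cbadf19a
```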
`grep rbd_secret_uuid /etc/kolla/passwords.yml`:
```
cinder_rbd_secret_uuid: 2c800a7f-622a-4ff2-b661-d36d8edc2001
rbd_secret_uuid: 0ac2495e-e40d-41fc-bfd3-4f42cbadf19a
```
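Note that passwords.yml holds two different UUIDs. Whichever one libvirt actually has registered must match the `rbd_secret_uuid` used by cinder-volume (0ac2495e-...); a mismatch is one known cause of the "error connecting: Operation not permitted" seen above. A quick check, again assuming the standard container name:

```
docker exec nova_libvirt virsh secret-list
# The UUID listed should be 0ac2495e-e40d-41fc-bfd3-4f42cbadf19a, and its
# value should be the client.cinder key:
docker exec nova_libvirt virsh secret-get-value 0ac2495e-e40d-41fc-bfd3-4f42cbadf19a
```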
Ceph users and keyrings were created like this:
```
sudo ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'allow *'
sudo ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'allow *'
sudo ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'allow *'
sudo ceph auth get-or-create client.nova mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images' mgr 'allow *'
sudo ceph auth get client.glance -o /etc/kolla/config/glance/ceph.client.glance.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
sudo ceph auth get client.cinder -o /etc/kolla/config/nova/ceph.client.cinder.keyring
sudo ceph auth get client.cinder-backup -o /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
sudo ceph auth get client.nova -o /etc/kolla/config/nova/ceph.client.nova.keyring
```
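To rule out the Ceph capabilities themselves, the cinder key can be exercised directly from a host that reaches the monitors. A hedged check using the keyring paths above and the pool from cinder-volume.conf, assuming /etc/ceph/ceph.conf on that host points at the external cluster:

```
# List the volumes pool as client.cinder; failure here points at caps/keys,
# success points back at the ceph.conf permissions inside the containers.
rbd --id cinder --keyring /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring -p volumes ls

# Confirm the granted capabilities match what was intended.
sudo ceph auth get client.cinder
sudo ceph auth get client.nova
```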