Unable to launch instance using Ceph boot volume

Bug #1711603 reported by Pranita Desai
Affects: OpenStack Compute (nova)
Status: Incomplete
Importance: Undecided
Assigned to: Aizuddin Zali

Bug Description

I have deployed the OpenStack Base bundle cs:~openstack-charmers-next/bundle/openstack-base-xenial-ocata-0, which deploys Ceph as the storage backend.
https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-ocata/

I am unable to launch an instance using a Ceph volume as the boot volume. Launching an instance from a Glance image also fails. I am getting the errors below in nova-compute.log:

2017-08-18 12:24:02.691 1708784 ERROR nova.virt.libvirt.driver [req-72b3771b-209c-415f-b14c-4e885a72f0b8 ef1fea1a88c744ecb96d08bb56fdc412 5a41e5c16d2c4299b23d26b87ff994aa - - -] [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Failed to start libvirt guest
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [req-72b3771b-209c-415f-b14c-4e885a72f0b8 ef1fea1a88c744ecb96d08bb56fdc412 5a41e5c16d2c4299b23d26b87ff994aa - - -] [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Instance failed to spawn
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Traceback (most recent call last):
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2124, in _build_resources
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] yield resources
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1930, in _build_and_run_instance
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] block_device_info=block_device_info)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2698, in spawn
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] destroy_disks_on_failure=True)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5114, in _create_domain_and_network
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] destroy_disks_on_failure)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self.force_reraise()
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] six.reraise(self.type_, self.value, self.tb)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5086, in _create_domain_and_network
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] post_xml_callback=post_xml_callback)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5004, in _create_domain
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] guest.launch(pause=pause)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 145, in launch
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self._encoded_xml, errors='ignore')
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self.force_reraise()
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] six.reraise(self.type_, self.value, self.tb)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 140, in launch
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] return self._domain.createWithFlags(flags)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] rv = execute(f, *args, **kwargs)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] six.reraise(c, e, tb)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] rv = meth(*args, **kwargs)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1065, in createWithFlags
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] libvirtError: internal error: process exited while connecting to monitor: nfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-16-instance-00000011/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 'file=rbd:cinder-ceph/volume-2c04ab2b-9e9e-46c8-8995-a33cf1008331:id=cinder-ceph:key=AQA9P5VZMy4BDxAA1U8ELcourkvQE4MgaJPjbw==:auth_supported=cephx\;none:mon_host=172.16.10.6\:6789\;172.16.10.9\:6789\;172.16.10.11\:6789,format=raw,if=none,id=drive-virtio-disk0,serial=2c04ab2b-9e9e-46c8-8995-a33cf1008331,cache=none' -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:f4:e2:46,bus=pci.0,addr=0x2 -add-fd set=2,fd=31 -chardev pty,id=charserial0,logfile=/dev/fdset/2,logap

However, I am able to launch an instance when the volume-creation step is skipped.
I tried disabling the firewall on all Ceph and OpenStack nodes, but no luck.

Can someone help?

Revision history for this message
Aizuddin Zali (mymzbe) wrote :

Has AppArmor been checked?

Changed in nova:
status: New → Incomplete
assignee: nobody → Aizuddin Zali (mymzbe)
Revision history for this message
Pranita Desai (pranitad) wrote :

AppArmor is enabled on all nodes.
Should I disable AppArmor on all systems?
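
A quick way to confirm the AppArmor state on each node before deciding whether to disable it (a minimal sketch; `apparmor_enabled` is an illustrative helper, reading the standard sysfs flag on Ubuntu/Debian kernels):

```python
from pathlib import Path

def apparmor_enabled() -> bool:
    """Best-effort check of AppArmor state via sysfs.

    On Ubuntu/Debian kernels, /sys/module/apparmor/parameters/enabled
    contains "Y" when the AppArmor LSM is loaded; the file is absent
    (or unreadable) when it is not.
    """
    state = Path("/sys/module/apparmor/parameters/enabled")
    try:
        return state.read_text().strip() == "Y"
    except OSError:
        return False
```

If this returns True on the compute nodes, checking `/var/log/syslog` for `apparmor="DENIED"` entries around the launch time would show whether AppArmor is actually blocking libvirt/qemu.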

Revision history for this message
Aizuddin Zali (mymzbe) wrote :

You can try disabling it. Is the compute node able to mount the Ceph storage?

Revision history for this message
Pranita Desai (pranitad) wrote :

Disabling AppArmor on all nodes didn't help.

I get the errors below when I try to attach a Ceph volume to an ephemeral instance.

2017-08-21 13:02:15.445 170460 ERROR nova.compute.manager [instance: e699e55d-87b6-4f78-9e82-c3607b9e049f]
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server [req-737e10cf-f65d-48b7-977e-f49f882816b9 ef1fea1a88c744ecb96d08bb56fdc412 5a41e5c16d2c4299b23d26b87ff994aa - - -] Exception during message handling
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 222, in dispatch
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 192, in _do_dispatch
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in wrapped
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server self.force_reraise()
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in wrapped
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 216, in decorated_function
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server self.force_reraise()
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-08-21 13:02:15.570 170460 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2017-08-21 13:02:15.570 170460 ERROR os...

Revision history for this message
Pranita Desai (pranitad) wrote :

Also, I am unable to mount the Ceph storage on the compute node.

Revision history for this message
Pranita Desai (pranitad) wrote :

root@vm0:~# sudo mount -t ceph 192.168.0.1:6789:/ /mnt/
mount error 110 = Connection timed out

Revision history for this message
Aizuddin Zali (mymzbe) wrote :

Is the Ceph server 192.168.0.1 listening on port 6789? On 192.168.0.1, check the listening ports with 'netstat -nutlp | grep 6789'.

It seems nova is unable to provision the requested Ceph disk.
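
The same reachability check can be scripted from the compute node (a minimal sketch; `monitor_reachable` is an illustrative helper, and 6789 is the default Ceph monitor port):

```python
import socket

def monitor_reachable(host: str, port: int = 6789, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Ceph monitor succeeds.

    A timeout here matches the 'mount error 110 = Connection timed out'
    symptom above: the monitor port is filtered by a firewall, or no
    monitor is listening at that address.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `monitor_reachable("192.168.0.1")` run on the compute node should return True before the mount can be expected to work; checking each of the mon_host addresses from the log (172.16.10.6, 172.16.10.9, 172.16.10.11) the same way would show whether the compute node can reach the cluster at all.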
