Nova Ceph backend - unable to boot VM with personality

Bug #1409300 reported by Benny Kopilov
This bug affects 1 person
Affects: Cinder
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi,
RHEL 7.0 with RHOS 5 OpenStack.
Configured OpenStack with Nova, Glance and Cinder using a Ceph backend.

When trying to boot a VM with personality (injected files), the boot fails.

From Tempest:
import base64

from tempest.api.compute import base
from tempest.common.utils import data_utils


class ServersTestJSON(base.BaseV2ComputeTest):
    disk_config = 'AUTO'

    @classmethod
    def resource_setup(cls):
        cls.prepare_instance_network()
        super(ServersTestJSON, cls).resource_setup()
        cls.meta = {'hello': 'world'}
        cls.accessIPv4 = '1.1.1.1'
        cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2'
        cls.name = data_utils.rand_name('server')
        file_contents = 'This is a test file.'
        personality = [{'path': '/test.txt',
                        'contents': base64.b64encode(file_contents)}]
        cls.client = cls.servers_client
        cls.network_client = cls.os.network_client
        cli_resp = cls.create_test_server(name=cls.name,
                                          meta=cls.meta,
                                          accessIPv4=cls.accessIPv4,
                                          accessIPv6=cls.accessIPv6,
                                          personality=personality,
                                          disk_config=cls.disk_config)
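
For reference, the same failure can be reproduced outside Tempest by booting a server with a personality file through python-novaclient. This is only a minimal sketch, assuming the usual OS_* environment variables and placeholder image/flavor names (none of these names come from the report):

import os

from novaclient import client

# Build a compute API v2 client from the standard environment variables.
nova = client.Client('2',
                     os.environ['OS_USERNAME'],
                     os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'],
                     os.environ['OS_AUTH_URL'])

# "Personality" is the files= argument: file contents to inject into the
# guest at boot. With the RBD ephemeral backend this is what triggers the
# failure shown in the nova-compute log below.
nova.servers.create(name='personality-test',
                    image=nova.images.find(name='cirros'),
                    flavor=nova.flavors.find(name='m1.small'),
                    files={'/test.txt': 'This is a test file.'})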

Ceph configuration: attached.

2015-01-10 19:06:16.859 15202 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): rbd import --pool automation-cinder /var/lib/nova/instances/_base/e239f7b321b8b72ed0335cccd4adcf206e1fa487 2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk --new-format --id automation-cinder --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:154
2015-01-10 19:06:18.381 15202 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:187
2015-01-10 19:06:21.473 15202 DEBUG nova.virt.libvirt.rbd_utils [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] resizing rbd image 2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk to 4294967296 resize /usr/lib/python2.7/site-packages/nova/virt/libvirt/rbd_utils.py:224
2015-01-10 19:06:24.589 15202 DEBUG nova.virt.disk.api [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] Inject data image=rbd:automation-cinder/2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk:id=automation-cinder:conf=/etc/ceph/ceph.conf key=None net=None metadata={u'hello': u'world'} admin_password=<SANITIZED> files=[(u'/test.txt', 'This is a test file.')] partition=-1 use_cow=True inject_data /usr/lib/python2.7/site-packages/nova/virt/disk/api.py:345
2015-01-10 19:06:24.590 15202 ERROR nova.virt.libvirt.driver [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] Error injecting data into image d04c4d17-d006-4bec-82e6-5f94edd12783 ([Errno 2] No such file or directory: 'rbd:automation-cinder/2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk:id=automation-cinder:conf=/etc/ceph/ceph.conf')
2015-01-10 19:06:24.590 15202 ERROR nova.compute.manager [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] Instance failed to spawn
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] Traceback (most recent call last):
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1744, in _spawn
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] block_device_info)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2316, in spawn
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] admin_pass=admin_password)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2774, in _create_image
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] instance, network_info, admin_pass, files, suffix)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2614, in _inject_data
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] instance=instance)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] six.reraise(self.type_, self.value, self.tb)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2608, in _inject_data
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] mandatory=('files',))
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] File "/usr/lib/python2.7/site-packages/nova/virt/disk/api.py", line 351, in inject_data
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] os.stat(image)
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] OSError: [Errno 2] No such file or directory: 'rbd:automation-cinder/2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk:id=automation-cinder:conf=/etc/ceph/ceph.conf'
2015-01-10 19:06:24.590 15202 TRACE nova.compute.manager [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e]
2015-01-10 19:06:24.591 15202 DEBUG nova.compute.claims [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] Aborting claim: [Claim: 1024 MB memory, 4 GB disk, 2 VCPUS] abort /usr/lib/python2.7/site-packages/nova/compute/claims.py:113
2015-01-10 19:06:24.591 15202 DEBUG nova.openstack.common.lockutils [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] Got semaphore "compute_resources" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:168
2015-01-10 19:06:24.591 15202 DEBUG nova.openstack.common.lockutils [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] Got semaphore / lock "abort_instance_claim" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:248
2015-01-10 19:06:24.622 15202 DEBUG nova.openstack.common.lockutils [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] Semaphore / lock released "abort_instance_claim" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:252
2015-01-10 19:06:24.639 15202 DEBUG nova.compute.utils [req-e8fc0308-ae54-4e27-9624-8369b947b5b6 d7413f7fcaf94ad5a6f513859c3a9d9e 45ff28360faa4ea9a15ea363a98a9ed1] [instance: 2e01db14-fbe5-49b8-b296-3b39b547ad1e] [Errno 2] No such file or directory: 'rbd:automation-cinder/2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk:id=automation-cinder:conf=/etc/ceph/ceph.conf' notify_about_instance_usage /usr/lib/python2.7/site-packages/nova/compute/utils.py:336
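
The traceback above ends in inject_data() in nova/virt/disk/api.py, which stats the disk path before doing any injection. With an RBD-backed ephemeral disk that path is an rbd: URI rather than a local file, so the stat fails with ENOENT. A simplified, paraphrased sketch of that check (not a verbatim copy of the shipped code):

import os

def inject_data(image, key=None, net=None, metadata=None, admin_password=None,
                files=None, partition=None, use_cow=False, mandatory=()):
    """Sketch of the first step of nova.virt.disk.api.inject_data."""
    try:
        # 'image' here is the rbd: URI from the log above
        # (rbd:automation-cinder/<instance>_disk:id=...:conf=...), not a
        # local path, so this raises
        # OSError: [Errno 2] No such file or directory.
        os.stat(image)
    except OSError:
        # 'files' is listed in 'mandatory' because the personality files
        # came in through the API, so the error is re-raised and the spawn
        # fails instead of being logged as a warning.
        raise
    # The real function would then mount the image via a VFS and write the
    # requested files, which is never reached here.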

Tags: drivers rbd
Benny Kopilov (bkopilov) wrote:

nova-api.log

Mike Perez (thingee)
tags: added: drivers rbd
Benny Kopilov (bkopilov) wrote:

backend configuration:
 "cinder": {
        "DEFAULT": {
            "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
            "rbd_user": "automation-cinder",
            "rbd_pool": "automation-cinder",
            "rbd_ceph_conf": "/etc/ceph/ceph.conf",
            "rbd_flatten_volume_from_snapshot": "false",
            "rbd_max_clone_depth": "5",
            "glance_api_version": "2",
            "backup_driver": "cinder.backup.drivers.ceph",
            "backup_ceph_conf": "/etc/ceph/ceph.conf",
            "backup_ceph_user": "automation-cinder-backup",
            "backup_ceph_pool": "automation-cinder-backup",
            "backup_ceph_chunk_size": "134217728",
            "backup_ceph_stripe_unit": "0",
            "backup_ceph_stripe_count": "0",
            "restore_discard_excess_bytes": "true"
        }
    },
    "nova": {
        "DEFAULT": {
            "libvirt_images_type": "rbd",
            "libvirt_images_rbd_pool": "automation-cinder",
            "libvirt_images_rbd_ceph_conf": "/etc/ceph/ceph.conf",
            "libvirt_inject_password": "false",
            "libvirt_inject_key": "false",
            "libvirt_inject_partition": "-2",
            "rbd_user": "automation-cinder",
            "libvirt_live_migration_flag": "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
        }
    },
    "glance": {
        "DEFAULT": {
            "default_store": "rbd",
            "rbd_store_user": "automation-glance",
            "rbd_store_pool": "automation-glance",
            "show_image_direct_url": "True",
            "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
            "show_image_direct_url": "True",
            "rbd_store_chunk_size": "8"
        }
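
For completeness, one way to confirm that the instance disk really does exist in the pool (so the rbd import in the nova-compute log worked and the failure is only in the injection step) is to open it with the python-rbd bindings. A hypothetical diagnostic sketch, reusing the user/pool names from the configuration above and the disk name from the log:

import rados
import rbd

# Connect as the same Ceph user Nova is configured with.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      rados_id='automation-cinder')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('automation-cinder')
    try:
        image = rbd.Image(ioctx, '2e01db14-fbe5-49b8-b296-3b39b547ad1e_disk')
        try:
            # If the import succeeded, this prints the 4 GiB size the disk
            # was resized to in the log above.
            print('disk exists, size = %d bytes' % image.size())
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()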
