cinder-scheduler fails to transmit the image_id parameter to cinder-volume

Bug #1212710 reported by Francois Deppierraz
This bug affects 2 people
Affects: Cinder
Status: Fix Released
Importance: High
Assigned to: Avishay Traeger
Milestone: 2013.2

Bug Description

This breaks the create-volume-from-image feature with the RBD backend, and perhaps with other backends as well.

On a grizzly setup with both glance and cinder configured with RBD
backend, I cannot find a way to create a new volume from a (raw) image.
The resulting volume is correctly created but remains full of NULL bytes.

$ glance show 92af7175-9478-42c4-8ed0-b336362ff3f7
URI: https://example.com:9292/v1/images/92af7175-9478-42c4-8ed0-b336362ff3f7
Id: 92af7175-9478-42c4-8ed0-b336362ff3f7
Public: Yes
Protected: No
Name: precise-server-cloudimg-amd64.raw
Status: active
Size: 2147483648
Disk format: raw
Container format: bare
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 8226a848c4eb43e69b1a0f00c582e31f
Created at: 2013-08-11T07:22:13
Updated at: 2013-08-11T07:29:10
$ cinder create 10 --image-id 92af7175-9478-42c4-8ed0-b336362ff3f7 --display-name boot-from-volume

In the following logs, cinder-scheduler receives the correct image_id
parameter, but cinder-volume receives None instead, which looks weird.

==> cinder-scheduler.log <==
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp] received
{u'_context_roles': [u'Member', u'_member_', u'admin'],
u'_context_request_id': u'req-a0539c82-8777-4dd3-a6ce-87d5d37120ac',
u'_context_quota_class': None, u'_unique_id':
u'5e5eca077ae149a6a64b1760bc55cc55', u'_context_read_deleted': u'no',
u'args': {u'request_spec': {u'volume_id':
u'79e98020-555c-4179-97ff-867a79a37160', u'volume_properties':
{u'status': u'creating', u'volume_type_id': None, u'display_name':
u'boot-from-volume', u'availability_zone': u'nova', u'attach_status':
u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata':
[], u'display_description': None, u'snapshot_id': None, u'user_id':
u'01d6651d807649718e01be2999b11af0', u'project_id':
u'8226a848c4eb43e69b1a0f00c582e31f', u'id':
u'79e98020-555c-4179-97ff-867a79a37160', u'size': 10},
u'volume_type': {}, u'image_id': u'92af7175-9478-42c4-8ed0-b336362ff3f7',
u'source_volid': None, u'snapshot_id': None}, u'volume_id':
u'79e98020-555c-4179-97ff-867a79a37160', u'filter_properties': {},
u'topic': u'cinder-volume', u'image_id':
u'92af7175-9478-42c4-8ed0-b336362ff3f7', u'snapshot_id': None},
u'_context_tenant': u'8226a848c4eb43e69b1a0f00c582e31f',
u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': False,
u'version': u'1.2', u'_context_project_id':
u'8226a848c4eb43e69b1a0f00c582e31f',
u'_context_timestamp': u'2013-08-11T22:34:22.569488', u'_context_user':
u'01d6651d807649718e01be2999b11af0', u'_context_user_id':
u'01d6651d807649718e01be2999b11af0',
u'method': u'create_volume', u'_context_remote_address': u'1.2.3.4'}
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp] unpacked
context: {'user_id': u'01d6651d807649718e01be2999b11af0', 'roles':
[u'Member', u'_member_', u'admin'], 'timestamp':
u'2013-08-11T22:34:22.569488', 'auth_token':
'<SANITIZED>', 'remote_address': u'1.2.3.4', 'quota_class': None,
'is_admin': False, 'user':
u'01d6651d807649718e01be2999b11af0', 'request_id':
u'req-a0539c82-8777-4dd3-a6ce-87d5d37120ac', 'project_id':
u'8226a848c4eb43e69b1a0f00c582e31f', 'read_deleted': u'no',
'tenant': u'8226a848c4eb43e69b1a0f00c582e31f'}
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp] Making
asynchronous cast on cinder-volume.sun2-test...
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp]
UNIQUE_ID is b3bc44f910054661a7a532bd6bfbe5bb.

==> cinder-volume.log <==
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp] received
{u'_context_roles': [u'Member', u'_member_', u'admin'],
u'_context_request_id': u'req-a0539c82-8777-4dd3-a6ce-87d5d37120ac',
u'_context_quota_class': None, u'_unique_id':
u'b3bc44f910054661a7a532bd6bfbe5bb', u'args': {u'request_spec': None,
u'volume_id': u'79e98020-555c-4179-97ff-867a79a37160',
u'allow_reschedule': True, u'filter_properties':
u'92af7175-9478-42c4-8ed0-b336362ff3f7', u'source_volid': None,
u'image_id': None, u'snapshot_id': None}, u'_context_tenant':
u'8226a848c4eb43e69b1a0f00c582e31f', u'_context_auth_token':
'<SANITIZED>', u'_context_timestamp': u'2013-08-11T22:34:22.569488',
u'_context_is_admin': False, u'version': u'1.4', u'_context_project_id':
u'8226a848c4eb43e69b1a0f00c582e31f', u'_context_user':
u'01d6651d807649718e01be2999b11af0',
u'_context_read_deleted': u'no', u'_context_user_id':
u'01d6651d807649718e01be2999b11af0', u'method': u'create_volume',
u'_context_remote_address': u'1.2.3.4'}
2013-08-12 00:34:23 DEBUG [cinder.openstack.common.rpc.amqp] unpacked
context: {'user_id': u'01d6651d807649718e01be2999b11af0', 'roles':
[u'Member', u'_member_', u'admin'], 'timestamp':
u'2013-08-11T22:34:22.569488', 'auth_token': '<SANITIZED>',
'remote_address': u'1.2.3.4', 'quota_class': None, 'is_admin': False,
'user': u'01d6651d807649718e01be2999b11af0', 'request_id':
u'req-a0539c82-8777-4dd3-a6ce-87d5d37120ac', 'project_id':
u'8226a848c4eb43e69b1a0f00c582e31f', 'read_deleted': u'no',
'tenant': u'8226a848c4eb43e69b1a0f00c582e31f'}
2013-08-12 00:34:23 DEBUG [cinder.volume.manager] volume
volume-79e98020-555c-4179-97ff-867a79a37160: creating lv of size 10G
2013-08-12 00:34:23 INFO [cinder.volume.manager] volume
volume-79e98020-555c-4179-97ff-867a79a37160: creating
2013-08-12 00:34:23 DEBUG [cinder.utils] Running cmd (subprocess):
rbd --help
2013-08-12 00:34:23 DEBUG [cinder.utils] Running cmd (subprocess):
rbd create --pool volumes --size 10240
volume-79e98020-555c-4179-97ff-867a79a37160 --new-format
2013-08-12 00:34:23 DEBUG [cinder.volume.manager] volume
volume-79e98020-555c-4179-97ff-867a79a37160: creating export
2013-08-12 00:34:24 INFO [cinder.volume.manager] volume
volume-79e98020-555c-4179-97ff-867a79a37160: created successfully
2013-08-12 00:34:24 INFO [cinder.volume.manager] Clear capabilities
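
Reading the two payloads side by side, the image id ends up in
filter_properties on the cinder-volume side while image_id itself is None,
which smells like a positional/keyword argument mix-up in the scheduler's
call into volume_rpcapi. Below is a minimal sketch of that kind of shuffle;
it is not the actual Cinder code, and the parameter order is only inferred
from the log above and the workaround patch in the first comment:

# Stand-in for volume_rpcapi.create_volume(); parameter order inferred from
# the cinder-volume log payload, not copied from the real Cinder source.
def create_volume(ctxt, volume, host, request_spec, filter_properties,
                  allow_reschedule=True, snapshot_id=None, image_id=None,
                  source_volid=None):
    print("request_spec=%r filter_properties=%r image_id=%r"
          % (request_spec, filter_properties, image_id))

# The simple/chance schedulers appear to pass snapshot_id and image_id
# positionally, so they land in the request_spec and filter_properties slots:
create_volume(None, "volume", "host",
              None,                                      # meant as snapshot_id
              "92af7175-9478-42c4-8ed0-b336362ff3f7")    # meant as image_id
# -> request_spec=None filter_properties='92af...' image_id=None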

cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rabbit_host=127.0.0.1
sql_connection=mysql://cinder:cinder@127.0.0.1/cinder?charset=utf8
api_paste_config=/etc/cinder/api-paste.ini
debug=True
rabbit_userid=nova
osapi_volume_listen=0.0.0.0
rabbit_virtual_host=/
scheduler_driver=cinder.scheduler.simple.SimpleScheduler
rabbit_hosts=127.0.0.1:5672
rabbit_ha_queues=False
rabbit_password=secret
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu
sql_idle_timeout=3600
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_user=volumes
max_gigabytes=25000
rbd_pool=volumes
rbd_secret_uuid=XXXX
glance_api_version=2

glance-api.conf:

[DEFAULT]
verbose = True
debug = True
default_store = rbd
bind_host = 0.0.0.0
bind_port = 9292
log_file = /var/log/glance/api.log
backlog = 4096
sql_connection = mysql://glance:glance@127.0.0.1/glance
sql_idle_timeout = 3600
workers = 2
show_image_direct_url = True
registry_host = 0.0.0.0
registry_port = 9191
registry_client_protocol = http
notifier_strategy = noop
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
filesystem_store_datadir = /var/lib/glance/images/
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_chunk_size = 8
delayed_delete = False
scrub_time = 43200
scrubber_datadir = /var/lib/glance/scrubber
image_cache_dir = /var/lib/glance/image-cache/
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = secret
[paste_deploy]
flavor=keystone+cachemanagement

Revision history for this message
Francois Deppierraz (francois-ctrlaltdel) wrote :

This issue could be solved with the following hack:

diff --git a/cinder/scheduler/simple.py b/cinder/scheduler/simple.py
index 5329d6f..43590f3 100644
--- a/cinder/scheduler/simple.py
+++ b/cinder/scheduler/simple.py
@@ -65,8 +65,10 @@ class SimpleScheduler(chance.ChanceScheduler):
             self.volume_rpcapi.create_volume(context,
                                              updated_volume,
                                              host,
-                                             snapshot_id,
-                                             image_id)
+                                             None,
+                                             None,
+                                             snapshot_id=snapshot_id,
+                                             image_id=image_id)
             return None

         results = db.service_get_all_volume_sorted(elevated)
@@ -84,8 +86,10 @@ class SimpleScheduler(chance.ChanceScheduler):
                 self.volume_rpcapi.create_volume(context,
                                                  updated_volume,
                                                  service['host'],
-                                                 snapshot_id,
-                                                 image_id)
+                                                 None,
+                                                 None,
+                                                 snapshot_id=snapshot_id,
+                                                 image_id=image_id)
                 return None
         msg = _("Is the appropriate service running?")
         raise exception.NoValidHost(reason=msg)

A quick grep shows that the "chance" scheduler is probably impacted as well.

$ grep -A5 self.volume_rpcapi.create_volume *
chance.py: self.volume_rpcapi.create_volume(context, updated_volume, host,
chance.py- snapshot_id, image_id)
[...]
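
The equivalent workaround for chance.py would presumably look like this
(untested sketch, same keyword-argument approach as the simple.py hunk above):

    # chance.py, inside schedule_create_volume() -- untested sketch:
    # fill the request_spec/filter_properties slots explicitly and pass
    # snapshot_id/image_id as keyword arguments so they no longer shift.
    self.volume_rpcapi.create_volume(context, updated_volume, host,
                                     None,    # request_spec
                                     None,    # filter_properties
                                     snapshot_id=snapshot_id,
                                     image_id=image_id)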

Changed in cinder:
status: New → Confirmed
importance: Undecided → Medium
assignee: nobody → Avishay Traeger (avishay-il)
importance: Medium → High
Revision history for this message
Avishay Traeger (avishay-il) wrote :

This bug is still present in Havana. Will fix in Havana and backport.

Revision history for this message
Avishay Traeger (avishay-il) wrote :

Thanks for the bug report!

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/47896

Changed in cinder:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/47896
Committed: http://github.com/openstack/cinder/commit/d5cd6528f361979b073aabd036be0d28dc1c4b95
Submitter: Jenkins
Branch: master

commit d5cd6528f361979b073aabd036be0d28dc1c4b95
Author: Avishay Traeger <email address hidden>
Date: Mon Sep 23 21:18:56 2013 +0300

    Fix volume_rpcapi calls for chance/simple scheds

    Change the chance and simple schedulers to pass the snapshot_id and
    image_id parameters correctly to volume_rpcapi, so that their values reach
    cinder-volume.

    Change-Id: I0abbca1fa0445c5233387a0f17363fc092d39b88
    Closes-Bug: #1212710

Changed in cinder:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/48178

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/48178
Committed: http://github.com/openstack/cinder/commit/c77868f44b1e72be2b65f697a1f0f3b32126e581
Submitter: Jenkins
Branch: master

commit c77868f44b1e72be2b65f697a1f0f3b32126e581
Author: Mike Perez <email address hidden>
Date: Tue Sep 24 20:55:04 2013 -0700

    Pass correct args to vol_rpc create_volume calls

    In the chance and simple scheduler, create volume was originally using
    snapshot_id and image_id for request_spec and filter_properties. This
    corrects that by passing the correct arguments and keyword arguments to
    create_volume.

    Change-Id: Icbcfbfb28f36e1f75519bf5ad6fcbcc12a9b4ec1
    Closes-Bug: #1212710

Thierry Carrez (ttx)
Changed in cinder:
milestone: none → havana-rc1
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in cinder:
milestone: havana-rc1 → 2013.2
Revision history for this message
Jesse Pretorius (jesse-pretorius) wrote :

Has this fix been backported to grizzly?
