cinder-scheduler fails to transmit the image_id parameter to cinder-volume
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | High | Avishay Traeger |
Bug Description
This breaks the create-volume-from-image feature with the RBD backend, and perhaps other backends as well?
On a grizzly setup with both glance and cinder configured with the RBD
backend, I cannot find a way to create a new volume from a (raw) image.
The resulting volume is correctly created but remains full of NULL bytes.
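One way to confirm that the resulting RBD image really contains only zeroes (a sketch assuming python-rados/python-rbd are installed; the volume name is a placeholder to fill in with the real UUID):

import rados
import rbd

# Connect as the cinder RBD user ('volumes', per cinder.conf below) and read
# the first few MiB of the created volume to check it is all zero bytes.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='volumes')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')
    try:
        image = rbd.Image(ioctx, 'volume-<uuid>')  # placeholder volume name
        try:
            data = image.read(0, 4 * 1024 * 1024)
            print('all zero bytes:', data.count(b'\x00') == len(data))
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()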
$ glance show 92af7175-
URI: https:/
Id: 92af7175-
Public: Yes
Protected: No
Name: precise-
Status: active
Size: 2147483648
Disk format: raw
Container format: bare
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 8226a848c4eb43e
Created at: 2013-08-11T07:22:13
Updated at: 2013-08-11T07:29:10
$ cinder create 10 --image-id 92af7175-
--display-name boot-from-volume
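For reference, the same request through python-cinderclient hits the same code path (a sketch; the credentials, auth URL, and the truncated image id below are placeholders, not values from this setup):

from cinderclient.v1 import client

# Placeholder credentials/endpoint; imageRef is the image to copy into the
# new volume, i.e. the image_id that gets lost on its way to cinder-volume.
cinder = client.Client('demo', 'secret', 'demo',
                       'http://127.0.0.1:5000/v2.0')
vol = cinder.volumes.create(size=10,
                            display_name='boot-from-volume',
                            imageRef='92af7175-...')
print(vol.id, vol.status)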
In the following logs, cinder-scheduler receives the correct image_id
parameter but cinder-volume receives None instead, which looks weird.
==> cinder-scheduler.log <==
2013-08-12 00:34:23 DEBUG [cinder.
{u'_context_roles': [u'Member', u'_member_', u'admin'],
u'_context_
u'_context_
u'5e5eca077ae14
u'args': {u'request_spec': {u'volume_id':
u'79e98020-
{u'status': u'creating', u'volume_type_id': None, u'display_name':
u'boot-
u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata':
[], u'display_
u'01d6651d80764
u'8226a848c4eb4
u'79e98020-
volume_type': {}, u'image_id': u'92af7175-
u'source_volid': None, u'snapshot_id': None}, u'volume_id':
u'79e98020-
u'topic': u'cinder-volume', u'image_id':
u'92af7175-
u'_context_tenant': u'8226a848c4eb4
u'_context_
u'version': u'1.2', u'_context_
u'8226a848c4eb4
u'_context_
u'01d6651d80764
u'01d6651d80764
ethod': u'create_volume', u'_context_
2013-08-12 00:34:23 DEBUG [cinder.
context: {'user_id': u'01d6651d80764
[u'Member', u'_member_', u'adm
in'], 'timestamp': u'2013-
'<SANITIZED>', 'remote_address': u'1.2.3.4', 'quota_class': None,
'is_admin': False, 'user': u'01d66
51d807649718e01
u'req-a0539c82-
u'8226a848c4eb4
': u'8226a848c4eb4
2013-08-12 00:34:23 DEBUG [cinder.
asynchronous cast on cinder-
2013-08-12 00:34:23 DEBUG [cinder.
UNIQUE_ID is b3bc44f91005466
==> cinder-volume.log <==
2013-08-12 00:34:23 DEBUG [cinder.
{u'_context_roles': [u'Member', u'_member_', u'admin'],
u'_context_
u'_context_
u'b3bc44f910054
u'volume_id': u'79e98020-
u'allow_
u'92af7175-
u'image_id': None, u'snapshot_id': None}, u'_context_tenant':
u'8226a848c4eb4
'<SANITIZED>', u'_context_
u'_context_
u'8226a848c4eb4
u'01d6651d80764
read_deleted': u'no', u'_context_
u'01d6651d80764
u'_context_
2013-08-12 00:34:23 DEBUG [cinder.
context: {'user_id': u'01d6651d80764
[u'Member', u'_member_', u'admin'], 'timestamp':
u'2013-
'remote_address': u'1.2.3.4', 'quota_class': None, 'is_admin': False,
'user': u'01d6651d80764
u'req-a0539c82-
u'8226a848c4eb4
': u'8226a848c4eb4
2013-08-12 00:34:23 DEBUG [cinder.
volume-
2013-08-12 00:34:23 INFO [cinder.
volume-
2013-08-12 00:34:23 DEBUG [cinder.utils] Running cmd (subprocess):
rbd --help
2013-08-12 00:34:23 DEBUG [cinder.utils] Running cmd (subprocess):
rbd create --pool volumes --size 10240
volume-
2013-08-12 00:34:23 DEBUG [cinder.
volume-
2013-08-12 00:34:24 INFO [cinder.
volume-
2013-08-12 00:34:24 INFO [cinder.
cinder.conf:
[DEFAULT]
rootwrap_config = /etc/cinder/
api_paste_confg = /etc/cinder/
iscsi_helper = tgtadm
volume_
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/
rabbit_
sql_connection=
api_paste_
debug=True
rabbit_userid=nova
osapi_volume_
rabbit_
scheduler_
rabbit_
rabbit_
rabbit_
rabbit_port=5672
rpc_backend=
sql_idle_
volume_
rbd_user=volumes
max_gigabytes=25000
rbd_pool=volumes
rbd_secret_
glance_
glance-api.conf:
[DEFAULT]
verbose = True
debug = True
default_store = rbd
bind_host = 0.0.0.0
bind_port = 9292
log_file = /var/log/
backlog = 4096
sql_connection = mysql:/
sql_idle_timeout = 3600
workers = 2
show_image_
registry_host = 0.0.0.0
registry_port = 9191
registry_
notifier_strategy = noop
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_
rabbit_
rabbit_
filesystem_
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_
delayed_delete = False
scrub_time = 43200
scrubber_datadir = /var/lib/
image_cache_dir = /var/lib/
[keystone_
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = secret
[paste_deploy]
flavor=
Changed in cinder:
status: New → Confirmed
importance: Undecided → Medium
assignee: nobody → Avishay Traeger (avishay-il)
importance: Medium → High

Changed in cinder:
milestone: none → havana-rc1
status: Fix Committed → Fix Released

Changed in cinder:
milestone: havana-rc1 → 2013.2
This issue could be solved with the following hack:
diff --git a/cinder/scheduler/simple.py b/cinder/scheduler/simple.py
index 5329d6f..43590f3 100644
--- a/cinder/scheduler/simple.py
+++ b/cinder/scheduler/simple.py
@@ -65,8 +65,10 @@ class SimpleScheduler(chance.ChanceScheduler):
             self.volume_rpcapi.create_volume(context,
                                              updated_volume,
                                              host,
-                                             snapshot_id,
-                                             image_id)
+                                             None,
+                                             None,
+                                             snapshot_id=snapshot_id,
+                                             image_id=image_id)
             return None

         results = db.service_get_all_volume_sorted(elevated)
@@ -84,8 +86,10 @@ class SimpleScheduler(chance.ChanceScheduler):
                 self.volume_rpcapi.create_volume(context,
                                                  updated_volume,
                                                  service['host'],
-                                                 snapshot_id,
-                                                 image_id)
+                                                 None,
+                                                 None,
+                                                 snapshot_id=snapshot_id,
+                                                 image_id=image_id)
                 return None
         msg = _("Is the appropriate service running?")
         raise exception.NoValidHost(reason=msg)
A quick grep shows that the "chance" scheduler is probably impacted as well.
$ grep -A5 self.volume_rpcapi.create_volume *
chance.py:        self.volume_rpcapi.create_volume(context, updated_volume, host,
chance.py-                                         snapshot_id, image_id)
[...]
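For context, here is a paraphrased sketch (from memory of the Grizzly-era code, not copied from this report) of why the positional call misroutes the arguments: the volume rpcapi's create_volume grew request_spec and filter_properties parameters ahead of snapshot_id and image_id, so the old positional call from the simple/chance schedulers shifts everything by two.

# Paraphrased sketch of the volume rpcapi signature (an assumption, not the
# exact upstream code).
class VolumeAPI(object):
    def create_volume(self, ctxt, volume, host,
                      request_spec, filter_properties,
                      allow_reschedule=True,
                      snapshot_id=None, image_id=None):
        ...

# Old-style call still made by simple.py/chance.py:
#   self.volume_rpcapi.create_volume(context, updated_volume, host,
#                                    snapshot_id, image_id)
# snapshot_id lands in request_spec and image_id in filter_properties, so the
# cinder-volume side sees image_id=None -- consistent with the logs above and
# with the hack of passing None, None followed by keyword arguments.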