[SRU] Acceleration cinder - glance with ceph not working

Bug #1816468 reported by Michal Arbet
Affects                    Status        Importance  Assigned to          Milestone
Cinder                     Fix Released  Undecided   Michal Arbet
OpenStack Compute (nova)   Fix Released  Medium      Edward Hope-Morley
Ubuntu Cloud Archive       Fix Released  High        Unassigned
  Rocky                    Fix Released  High        Unassigned
  Stein                    Fix Released  High        Unassigned
  Train                    Fix Released  High        Unassigned
cinder (Ubuntu)            Fix Released  High        Unassigned
  Cosmic                   Won't Fix     High        Unassigned
  Disco                    Fix Released  High        Unassigned
nova (Ubuntu)              Fix Released  High        Unassigned
  Cosmic                   Won't Fix     High        Unassigned
  Disco                    Fix Released  High        Unassigned
  Eoan                     Fix Released  High        Unassigned

Bug Description

[Impact]
For >= rocky (i.e. when using py3 packages) librados.cluster.get_fsid() returns bytes, which means the fsid can't be matched against the string version of the same value from glance when deciding whether an image stored in Ceph can be cow-cloned.
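
For illustration, the mismatch boils down to the following (not the cinder code itself; the fsid value is the one that appears in the logs later in this report):

    fsid_from_librados = b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'   # bytes under py3
    fsid_from_glance = '868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'      # str parsed from the direct_url

    print(fsid_from_librados == fsid_from_glance)                  # False: bytes never equal str
    print(fsid_from_librados.decode('utf-8') == fsid_from_glance)  # True once decoded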

[Test Case]
* deploy openstack rocky (using py3 packages)
* deploy ceph and use it as the glance backend
* set the following in /etc/glance/glance-api.conf:
      show_multiple_locations = True
      show_image_direct_url = True
* upload an image to glance
* attempt to boot an instance using this image
* confirm that the instance booted properly and check that the disk it booted from is a copy-on-write clone of the glance image by doing the following in ceph (a python-binding equivalent is sketched below):

    rbd -p nova info <vm uuid> | grep parent:

* confirm that you see "parent: glance/<image uuid>@snap"
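
For reference, the same check can be scripted with the python rbd bindings instead of the CLI (a rough sketch only, not part of the SRU verification; the rados_id and the pool/image name placeholders are assumptions following the test case above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('nova')
        try:
            # substitute the real image name for <vm uuid>; parent_info()
            # raises if the image is not a clone
            with rbd.Image(ioctx, '<vm uuid>') as image:
                pool, parent, snap = image.parent_info()
                print('parent: %s/%s@%s' % (pool, parent, snap))
                # expect: parent: glance/<image uuid>@snap
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()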

[Regression Potential]
None expected

[Other Info]
None.

----------------------------------------------------------------------------
When using cinder and glance with ceph, the code supports creating volumes from images as copy-on-write clones INSIDE the ceph cluster. This saves space in the ceph cluster and speeds up instance spawning, because the volume is created directly in ceph. <= THIS IS NOT WORKING IN PY3

If this feature is not enabled, the image is copied to the compute host, converted, turned into a volume and uploaded to ceph (which is of course time consuming).

The problem is that even when glance-cinder acceleration is turned on, the code takes the same path as when it is disabled: copy the image, create the volume, upload to ceph... BUT it should create a copy-on-write clone directly inside ceph. <= THIS IS A BUG IN PY3
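
The decision in the cinder RBD driver boils down to comparing the ceph fsid parsed out of the image's direct_url with the fsid of the cluster cinder is connected to. The sketch below is only an approximation of that flow (parse_location/should_clone stand in for the driver's private _parse_location()/_is_cloneable() helpers and are not the actual driver code); when the comparison fails, the driver silently falls back to the slow download/convert/import path seen in the log further down:

    from urllib import parse


    def parse_location(image_location, prefix='rbd://'):
        # 'rbd://<fsid>/<pool>/<image>/<snap>' -> (fsid, pool, image, snap)
        return tuple(parse.unquote(piece)
                     for piece in image_location[len(prefix):].split('/'))


    def should_clone(image_location, cluster_fsid):
        fsid, pool, image, snapshot = parse_location(image_location)
        # Under py3 librados returns the cluster fsid as bytes; without
        # normalising it this comparison is always False.
        if isinstance(cluster_fsid, bytes):
            cluster_fsid = cluster_fsid.decode('utf-8')
        return cluster_fsid == fsid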

Glance config (controller):

[DEFAULT]
show_image_direct_url = true <= this has to be set to true to reproduce issue
workers = 7
transport_url = rabbit://openstack:openstack@openstack-db
[cors]
[database]
connection = mysql+pymysql://glance:Eew7shai@openstack-db:3306/glance
[glance_store]
stores = file,rbd
default_store = rbd
filesystem_store_datadir = /var/lib/glance/images
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
[image_format]
[keystone_authtoken]
auth_url = http://openstack-ctrl:35357
project_name = service
project_domain_name = default
username = glance
user_domain_name = default
password = Eew7shai
www_authenticate_uri = http://openstack-ctrl:5000
auth_uri = http://openstack-ctrl:35357
cache = swift.cache
region_name = RegionOne
auth_type = password
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lock/glance
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
[profiler]
enabled = true
trace_sqlalchemy = true
hmac_keys = secret
connection_string = redis://127.0.0.1:6379
trace_wsgi_transport = True
trace_message_store = True
trace_management_store = True

Cinder conf (controller):
root@openstack-controller:/tmp# cat /etc/cinder/cinder.conf | grep -v '^#' | awk NF
[DEFAULT]
my_ip = 192.168.10.15
glance_api_servers = http://openstack-ctrl:9292
auth_strategy = keystone
enabled_backends = rbd
osapi_volume_workers = 7
debug = true
transport_url = rabbit://openstack:openstack@openstack-db
[backend]
[backend_defaults]
rbd_pool = volumes
rbd_user = volumes1
rbd_secret_uuid = b2efeb49-9844-475b-92ad-5df4a3e1300e
volume_driver = cinder.volume.drivers.rbd.RBDDriver
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:EeRe3ahx@openstack-db:3306/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url = http://openstack-ctrl:35357
project_name = service
project_domain_name = default
username = cinder
user_domain_name = default
password = EeRe3ahx
www_authenticate_uri = http://openstack-ctrl:5000
auth_uri = http://openstack-ctrl:35357
cache = swift.cache
region_name = RegionOne
auth_type = password
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[profiler]
enabled = true
trace_sqlalchemy = true
hmac_keys = secret
connection_string = redis://127.0.0.1:6379
trace_wsgi_transport = True
trace_message_store = True
trace_management_store = True
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes1
image_volume_cache_enabled = True
volume_clear = zero
rbd_max_clone_depth = 5
rbd_flatten_volume_from_snapshot = False

Cinder conf (compute node):

root@openstack-compute2:~# cat /etc/cinder/cinder.conf | grep -v '^#' | awk NF
[DEFAULT]
my_ip = 192.168.10.6
glance_api_servers = http://openstack-ctrl:9292
auth_strategy = keystone
enabled_backends = rbd
debug = true
transport_url = rabbit://openstack:openstack@openstack-db
[backend]
[backend_defaults]
rbd_pool = volumes
rbd_user = volumes1
rbd_secret_uuid = b2efeb49-9844-475b-92ad-5df4a3e1300e
volume_driver = cinder.volume.drivers.rbd.RBDDriver
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:EeRe3ahx@openstack-db:3306/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url = http://openstack-ctrl:5000
project_name = service
project_domain_name = default
username = cinder
user_domain_name = default
password = EeRe3ahx
www_authenticate_uri = http://openstack-ctrl:5000
auth_uri = http://openstack-ctrl:5000
cache = swift.cache
region_name = RegionOne
auth_type = password
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes1

Cinder log:

root@openstack-compute2:~# cat /var/log/cinder/*
2019-02-18 15:57:29.903 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Volume reschedule parameters: True retry: {'num_attempts': 1, 'backends': ['openstack-compute2@rbd#RBD'], 'hosts': ['openstack-compute2@rbd#RBD']} get_flow /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:1171
2019-02-18 15:57:29.921 29303 DEBUG oslo_db.sqlalchemy.engines [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:308
2019-02-18 15:57:29.929 29303 INFO cinder.rpc [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Automatically selected cinder-backup objects version 1.37 as minimum service version.
2019-02-18 15:57:29.937 29303 INFO cinder.rpc [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Automatically selected cinder-backup RPC version 2.1 as minimum service version.
2019-02-18 15:57:29.951 29303 INFO cinder.rpc [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Automatically selected cinder-volume objects version 1.37 as minimum service version.
2019-02-18 15:57:29.960 29303 INFO cinder.rpc [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Automatically selected cinder-volume RPC version 3.16 as minimum service version.
2019-02-18 15:57:30.007 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Flow 'volume_create_manager' (18af88d8-7ea2-41c0-8554-b231a179f492) transitioned into state 'RUNNING' from state 'PENDING' _flow_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:145
2019-02-18 15:57:30.011 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (f8656e1d-dd4c-4c83-9464-fc1376154d4a) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 15:57:30.051 29303 DEBUG oslo_db.sqlalchemy.engines [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:308
2019-02-18 15:57:30.082 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (f8656e1d-dd4c-4c83-9464-fc1376154d4a) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'Volume(_name_id=None,admin_metadata={},attach_status='detached',availability_zone='nova',bootable=False,cluster=<?>,cluster_name=None,consistencygroup=<?>,consistencygroup_id=None,created_at=2019-02-18T15:57:29Z,deleted=False,deleted_at=None,display_description='',display_name='',ec2_id=None,encryption_key_id=None,glance_metadata=<?>,group=<?>,group_id=None,host='openstack-compute2@rbd#RBD',id=d9b22b30-cb59-488b-94bd-15eaf47fd704,launched_at=None,metadata={},migration_status=None,multiattach=False,previous_status=None,project_id='ba5ef70fd99642fdb75c9307c88b1164',provider_auth=None,provider_geometry=None,provider_id=None,provider_location=None,replication_driver_data=None,replication_extended_status=None,replication_status=None,scheduled_at=2019-02-18T15:57:30Z,service_uuid=None,shared_targets=True,size=20,snapshot_id=None,snapshots=<?>,source_volid=None,status='creating',terminated_at=None,updated_at=2019-02-18T15:57:30Z,user_id='664034a641e64922a1cc253e2dc39e76',volume_attachment=<?>,volume_type=<?>,volume_type_id=None)' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 15:57:30.086 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (c8a584b1-5712-4043-a845-f3c24cfaa9be) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 15:57:30.088 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (c8a584b1-5712-4043-a845-f3c24cfaa9be) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 15:57:30.090 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (d9cadc4f-5b28-48c9-bd2f-a877b0f04294) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 15:57:30.311 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (d9cadc4f-5b28-48c9-bd2f-a877b0f04294) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'image_location': ('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None), 'type': 'image', 'status': 'creating', 'image_meta': {'owner': 'ba5ef70fd99642fdb75c9307c88b1164', 'visibility': 'shared', 'os_hash_value': '03fee9094d9720ddd51a7eb61755503a114752fbcefb10d1e5cf48a7f261ed9e36fe20e5c5bdf7e4a956910ebc6a0c0bd130b43e541d49b3b3362dd0d491851d', 'protected': False, 'properties': {}, 'os_hidden': False, 'checksum': 'abaeff277a4e2ce7dff253ad453312a5', 'min_disk': 0, 'updated_at': datetime.datetime(2019, 2, 18, 12, 3, 22, tzinfo=<iso8601.Utc>), 'created_at': datetime.datetime(2019, 2, 18, 12, 1, 4, tzinfo=<iso8601.Utc>), 'min_ram': 0, 'tags': [], 'virtual_size': None, 'direct_url': 'rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', 'name': 'debian-9', 'container_format': 'bare', 'disk_format': 'raw', 'id': 'eb816518-fac2-48f2-8197-ca7a83c89ada', 'os_hash_algo': 'sha512', 'size': 2147483648, 'status': 'active', 'file': '/v2/images/eb816518-fac2-48f2-8197-ca7a83c89ada/file'}, 'volume_id': 'd9b22b30-cb59-488b-94bd-15eaf47fd704', 'volume_name': 'volume-d9b22b30-cb59-488b-94bd-15eaf47fd704', 'volume_size': 20, 'image_service': <cinder.image.glance.GlanceImageService object at 0x7f6db2eb64e0>, 'image_id': 'eb816518-fac2-48f2-8197-ca7a83c89ada'}' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 15:57:30.314 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' (9bc7e799-7151-4451-8ed8-efef11b72c99) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 15:57:30.317 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' (9bc7e799-7151-4451-8ed8-efef11b72c99) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 15:57:30.319 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (980d2ea5-5e92-4699-bb48-8846797b7baf) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 15:57:30.320 29303 INFO cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Volume d9b22b30-cb59-488b-94bd-15eaf47fd704: being created as image with specification: {'image_location': ('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None), 'status': 'creating', 'image_meta': {'owner': 'ba5ef70fd99642fdb75c9307c88b1164', 'visibility': 'shared', 'os_hash_value': '03fee9094d9720ddd51a7eb61755503a114752fbcefb10d1e5cf48a7f261ed9e36fe20e5c5bdf7e4a956910ebc6a0c0bd130b43e541d49b3b3362dd0d491851d', 'protected': False, 'properties': {}, 'os_hidden': False, 'checksum': 'abaeff277a4e2ce7dff253ad453312a5', 'min_disk': 0, 'updated_at': datetime.datetime(2019, 2, 18, 12, 3, 22, tzinfo=<iso8601.Utc>), 'created_at': datetime.datetime(2019, 2, 18, 12, 1, 4, tzinfo=<iso8601.Utc>), 'min_ram': 0, 'tags': [], 'virtual_size': None, 'direct_url': 'rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', 'name': 'debian-9', 'container_format': 'bare', 'disk_format': 'raw', 'id': 'eb816518-fac2-48f2-8197-ca7a83c89ada', 'os_hash_algo': 'sha512', 'size': 2147483648, 'status': 'active', 'file': '/v2/images/eb816518-fac2-48f2-8197-ca7a83c89ada/file'}, 'volume_name': 'volume-d9b22b30-cb59-488b-94bd-15eaf47fd704', 'volume_size': 20, 'image_service': <cinder.image.glance.GlanceImageService object at 0x7f6db2eb64e0>, 'image_id': 'eb816518-fac2-48f2-8197-ca7a83c89ada'}
2019-02-18 15:57:30.320 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Cloning d9b22b30-cb59-488b-94bd-15eaf47fd704 from image eb816518-fac2-48f2-8197-ca7a83c89ada at location ('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None). _create_from_image /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:894
2019-02-18 15:57:30.322 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:30.356 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap is in a different ceph cluster. _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1320
2019-02-18 15:57:48.356 29303 DEBUG cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Image fetch details: dest /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd, size 2048.00 MB, duration 18.00 sec fetch /usr/lib/python3/dist-packages/cinder/image/image_utils.py:392
2019-02-18 15:57:48.356 29303 INFO cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Image download 2048.00 MB at 113.79 MB/s
2019-02-18 15:57:48.360 29303 DEBUG oslo_service.periodic_task [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Running periodic task VolumeManager.publish_service_capabilities run_periodic_tasks /usr/lib/python3/dist-packages/oslo_service/periodic_task.py:215
2019-02-18 15:57:48.361 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:48.393 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:48.430 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:48.590 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:48.765 29303 DEBUG cinder.manager [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python3/dist-packages/cinder/manager.py:194
2019-02-18 15:57:49.026 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd" returned: 0 in 0.436s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:49.027 29303 DEBUG cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Temporary image eb816518-fac2-48f2-8197-ca7a83c89ada is fetched for user 664034a641e64922a1cc253e2dc39e76. fetch /usr/lib/python3/dist-packages/cinder/image/image_utils.py:820
2019-02-18 15:57:49.268 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:49.699 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd" returned: 0 in 0.431s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:49.700 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] creating volume 'volume-d9b22b30-cb59-488b-94bd-15eaf47fd704' create_volume /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:793
2019-02-18 15:57:49.702 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:50.033 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Attempting download of eb816518-fac2-48f2-8197-ca7a83c89ada (('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None)) to volume d9b22b30-cb59-488b-94bd-15eaf47fd704. _copy_image_to_volume /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:513
2019-02-18 15:57:50.086 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp3rov4mxq execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:50.495 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp3rov4mxq" returned: 0 in 0.409s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:50.497 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:50.915 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd" returned: 0 in 0.418s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:50.917 29303 DEBUG cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] eb816518-fac2-48f2-8197-ca7a83c89ada was raw, converting to raw fetch_to_volume_format /usr/lib/python3/dist-packages/cinder/image/image_utils.py:558
2019-02-18 15:57:50.919 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -f raw /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd /var/lib/cinder/conversion/tmpiw6lpfyj execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:52.968 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -f raw /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd /var/lib/cinder/conversion/tmpiw6lpfyj" returned: 0 in 2.050s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:52.970 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:57:53.342 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C qemu-img info /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd" returned: 0 in 0.372s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 15:57:53.344 29303 DEBUG cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Image conversion details: src /var/lib/cinder/conversion/tmp2tkzwyhaopenstack-compute2@rbd, size 2048.00 MB, duration 2.05 sec, destination /var/lib/cinder/conversion/tmpiw6lpfyj _convert_image /usr/lib/python3/dist-packages/cinder/image/image_utils.py:263
2019-02-18 15:57:53.344 29303 INFO cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Converted 2048.00 MB image at 998.52 MB/s
2019-02-18 15:57:53.345 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:57:53.635 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] volume has no backup snaps _delete_backup_snaps /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:874
2019-02-18 15:57:53.636 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Volume volume-d9b22b30-cb59-488b-94bd-15eaf47fd704 is not a clone. _get_clone_info /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:897
2019-02-18 15:57:53.663 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] deleting rbd volume volume-d9b22b30-cb59-488b-94bd-15eaf47fd704 delete_volume /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:995
2019-02-18 15:57:54.843 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): rbd import --pool volumes --order 22 /var/lib/cinder/conversion/tmpiw6lpfyj volume-d9b22b30-cb59-488b-94bd-15eaf47fd704 --new-format --id volumes1 --cluster ceph --conf /etc/ceph/ceph.conf execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 15:58:48.369 29303 DEBUG oslo_service.periodic_task [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Running periodic task VolumeManager.publish_service_capabilities run_periodic_tasks /usr/lib/python3/dist-packages/oslo_service/periodic_task.py:215
2019-02-18 15:58:48.370 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:58:48.412 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:58:48.446 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:58:48.763 29303 DEBUG cinder.manager [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python3/dist-packages/cinder/manager.py:194
2019-02-18 15:59:48.371 29303 DEBUG oslo_service.periodic_task [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Running periodic task VolumeManager.publish_service_capabilities run_periodic_tasks /usr/lib/python3/dist-packages/oslo_service/periodic_task.py:215
2019-02-18 15:59:48.372 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:59:48.404 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:59:48.434 29303 DEBUG cinder.volume.drivers.rbd [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 15:59:48.763 29303 DEBUG cinder.manager [req-f4ac235a-987e-4892-9b2b-d624feec45b4 - - - - -] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python3/dist-packages/cinder/manager.py:194
2019-02-18 16:00:00.674 29303 DEBUG oslo_concurrency.processutils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "rbd import --pool volumes --order 22 /var/lib/cinder/conversion/tmpiw6lpfyj volume-d9b22b30-cb59-488b-94bd-15eaf47fd704 --new-format --id volumes1 --cluster ceph --conf /etc/ceph/ceph.conf" returned: 0 in 125.831s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 16:00:01.033 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] connecting to volumes1@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:336
2019-02-18 16:00:01.556 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Downloaded image eb816518-fac2-48f2-8197-ca7a83c89ada (('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None)) to volume d9b22b30-cb59-488b-94bd-15eaf47fd704 successfully. _copy_image_to_volume /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:557
2019-02-18 16:00:01.946 29303 DEBUG cinder.image.image_utils [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Temporary image eb816518-fac2-48f2-8197-ca7a83c89ada for user 664034a641e64922a1cc253e2dc39e76 is deleted. fetch /usr/lib/python3/dist-packages/cinder/image/image_utils.py:824
2019-02-18 16:00:01.948 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Marking volume d9b22b30-cb59-488b-94bd-15eaf47fd704 as bootable. _enable_bootable_flag /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:476
2019-02-18 16:00:02.663 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Copying metadata from image eb816518-fac2-48f2-8197-ca7a83c89ada to d9b22b30-cb59-488b-94bd-15eaf47fd704. _handle_bootable_volume_glance_meta /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:432
2019-02-18 16:00:02.664 29303 DEBUG cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Creating volume glance metadata for volume d9b22b30-cb59-488b-94bd-15eaf47fd704 backed by image eb816518-fac2-48f2-8197-ca7a83c89ada with: {'image_id': 'eb816518-fac2-48f2-8197-ca7a83c89ada', 'image_name': 'debian-9', 'container_format': 'bare', 'disk_format': 'raw', 'checksum': 'abaeff277a4e2ce7dff253ad453312a5', 'min_disk': 0, 'size': 2147483648, 'min_ram': 0}. _capture_volume_image_metadata /usr/lib/python3/dist-packages/cinder/volume/flows/manager/create_volume.py:591
2019-02-18 16:00:04.186 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (980d2ea5-5e92-4699-bb48-8846797b7baf) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{'image_location': ('rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', None), 'status': 'creating', 'image_meta': {'owner': 'ba5ef70fd99642fdb75c9307c88b1164', 'visibility': 'shared', 'os_hash_value': '03fee9094d9720ddd51a7eb61755503a114752fbcefb10d1e5cf48a7f261ed9e36fe20e5c5bdf7e4a956910ebc6a0c0bd130b43e541d49b3b3362dd0d491851d', 'protected': False, 'properties': {}, 'os_hidden': False, 'checksum': 'abaeff277a4e2ce7dff253ad453312a5', 'min_disk': 0, 'updated_at': datetime.datetime(2019, 2, 18, 12, 3, 22, tzinfo=<iso8601.Utc>), 'created_at': datetime.datetime(2019, 2, 18, 12, 1, 4, tzinfo=<iso8601.Utc>), 'min_ram': 0, 'tags': [], 'virtual_size': None, 'direct_url': 'rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap', 'name': 'debian-9', 'container_format': 'bare', 'disk_format': 'raw', 'id': 'eb816518-fac2-48f2-8197-ca7a83c89ada', 'os_hash_algo': 'sha512', 'size': 2147483648, 'status': 'active', 'file': '/v2/images/eb816518-fac2-48f2-8197-ca7a83c89ada/file'}, 'volume_name': 'volume-d9b22b30-cb59-488b-94bd-15eaf47fd704', 'volume_size': 20, 'image_service': <cinder.image.glance.GlanceImageService object at 0x7f6db2eb64e0>, 'image_id': 'eb816518-fac2-48f2-8197-ca7a83c89ada'}' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 16:00:04.188 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (afb50542-d5a4-4558-a6d7-6399ede97460) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:194
2019-02-18 16:00:04.258 29303 INFO cinder.volume.flows.manager.create_volume [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Volume volume-d9b22b30-cb59-488b-94bd-15eaf47fd704 (d9b22b30-cb59-488b-94bd-15eaf47fd704): created successfully
2019-02-18 16:00:04.260 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (afb50542-d5a4-4558-a6d7-6399ede97460) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:183
2019-02-18 16:00:04.263 29303 DEBUG cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Flow 'volume_create_manager' (18af88d8-7ea2-41c0-8554-b231a179f492) transitioned into state 'SUCCESS' from state 'RUNNING' _flow_receiver /usr/lib/python3/dist-packages/taskflow/listeners/logging.py:145
2019-02-18 16:00:04.294 29303 INFO cinder.volume.manager [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Created volume successfully.
2019-02-18 16:00:06.498 29303 DEBUG oslo_concurrency.processutils [req-c944f40c-94fd-4a0c-9640-6f3fe7b9f007 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Running cmd (subprocess): ceph mon dump --format=json --id volumes1 --cluster ceph --conf /etc/ceph/ceph.conf execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:372
2019-02-18 16:00:06.888 29303 DEBUG oslo_concurrency.processutils [req-c944f40c-94fd-4a0c-9640-6f3fe7b9f007 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] CMD "ceph mon dump --format=json --id volumes1 --cluster ceph --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:409
2019-02-18 16:00:06.890 29303 DEBUG cinder.volume.drivers.rbd [req-c944f40c-94fd-4a0c-9640-6f3fe7b9f007 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] connection data: {'driver_volume_type': 'rbd', 'data': {'cluster_name': 'ceph', 'secret_type': 'ceph', 'discard': True, 'keyring': None, 'hosts': ['185.21.196.128', '185.21.196.146', '185.21.196.163'], 'secret_uuid': 'b2efeb49-9844-475b-92ad-5df4a3e1300e', 'name': 'volumes/volume-d9b22b30-cb59-488b-94bd-15eaf47fd704', 'auth_username': 'volumes1', 'auth_enabled': True, 'volume_id': 'd9b22b30-cb59-488b-94bd-15eaf47fd704', 'ports': ['6789', '6789', '6789']}} initialize_connection /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1286
2019-02-18 16:00:06.891 29303 INFO cinder.volume.manager [req-c944f40c-94fd-4a0c-9640-6f3fe7b9f007 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Initialize volume connection completed successfully.
2019-02-18 16:00:08.302 29303 DEBUG cinder.coordination [req-3e789701-ed1a-47a2-8525-3c7631cbbd46 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Lock "/var/lib/cinder/cinder-d9b22b30-cb59-488b-94bd-15eaf47fd704" acquired by "attach_volume" :: waited 0.002s _synchronized /usr/lib/python3/dist-packages/cinder/coordination.py:150
2019-02-18 16:00:09.481 29303 INFO cinder.volume.manager [req-3e789701-ed1a-47a2-8525-3c7631cbbd46 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Attaching volume d9b22b30-cb59-488b-94bd-15eaf47fd704 to instance 4eeefa1e-4382-4716-8364-029c47aa8b24 at mountpoint /dev/vda on host None.
2019-02-18 16:00:09.905 29303 INFO cinder.volume.manager [req-3e789701-ed1a-47a2-8525-3c7631cbbd46 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Attach volume completed successfully.
2019-02-18 16:00:09.906 29303 DEBUG cinder.coordination [req-3e789701-ed1a-47a2-8525-3c7631cbbd46 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] Lock "/var/lib/cinder/cinder-d9b22b30-cb59-488b-94bd-15eaf47fd704" released by "attach_volume" :: held 1.604s _synchronized /usr/lib/python3/dist-packages/cinder/coordination.py:162

In the log above, the following line is the important one under python3 (str vs bytes issue):

2019-02-18 15:57:30.356 29303 DEBUG cinder.volume.drivers.rbd [req-bdc7dc4f-6fca-4358-beb9-43d3c2000b42 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap is in a different ceph cluster. _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1320

I've added the following to /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py, in the _is_cloneable function, to debug why:

    def _is_cloneable(self, image_location, image_meta):
        try:
            # MICHAL ARBET DEBUG
            LOG.debug("MICHAL ARBET Image Location : {}".format(image_location))
            LOG.debug("MICHAL ARBET TYPE Image Location : {}".format(type(image_location)))
            fsid, pool, image, snapshot = self._parse_location(image_location)
        except exception.ImageUnacceptable as e:
            LOG.debug('not cloneable: %s.', e)
            return False

        # MICHAL ARBET DEBUG
        self_get_fsid = self._get_fsid()
        LOG.debug("MICHAL ARBET fsid = {}".format(fsid))
        LOG.debug("MICHAL ARBET type fsid = {}".format(type(fsid)))
        LOG.debug("MICHAL ARBET self._get_fsid() = {}".format(self_get_fsid))
        LOG.debug("MICHAL ARBET type self._get_fsid() = {}".format(type(self_get_fsid)))
        if self_get_fsid != fsid:
            LOG.debug('%s is in a different ceph cluster.', image_location)
            return False

root@openstack-compute2:~# cat /var/log/cinder/* | grep MICHAL\ ARBET
2019-02-18 16:58:34.398 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET Image Location : rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1314
2019-02-18 16:58:34.398 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET TYPE Image Location : <class 'str'> _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1315
2019-02-18 16:58:34.436 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET fsid = b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041' _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1323
2019-02-18 16:58:34.437 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET type fsid = <class 'str'> _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1324
2019-02-18 16:58:34.438 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET self._get_fsid() = b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041' _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1325
2019-02-18 16:58:34.438 6973 DEBUG cinder.volume.drivers.rbd [req-b90a4a29-e66c-4704-bc44-1487cfff3924 664034a641e64922a1cc253e2dc39e76 ba5ef70fd99642fdb75c9307c88b1164 - default default] MICHAL ARBET type self._get_fsid() = <class 'bytes'> _is_cloneable /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1326

The log above shows the py3 bug: self._get_fsid() returns bytes, while the fsid parsed from the image location is a str, so the comparison always fails.
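
Reproducing the comparison in a python3 shell with the two values from the debug output above makes the failure obvious (note that both sides are mangled here - even decoding the librados value would not help until the glance-stored location is fixed as well):

    >>> fsid = "b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'"        # str parsed from the location
    >>> cluster_fsid = b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'  # bytes from librados
    >>> cluster_fsid == fsid
    False
    >>> cluster_fsid.decode('utf-8') == fsid
    False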

tags: added: ceph py3 rbd
Changed in cinder:
assignee: nobody → Michal Arbet (michalarbet)
Revision history for this message
Michal Arbet (michalarbet) wrote :

I think the problem is between glance and ceph.

Glance image show displays a corrupted location for the ceph-backed image:

root@openstack-controller:/tmp# openstack image show eb816518-fac2-48f2-8197-ca7a83c89ada

direct_url='rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap'

The string above was constructed from b'<ID>': the id comes back from ceph as bytes, so its repr (with apostrophes) ends up in the URL, and %27 is the URL-encoded '.

>>>>>>>>

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | abaeff277a4e2ce7dff253ad453312a5                     |
| container_format | bare                                                 |
| created_at       | 2019-02-18T12:01:04Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/eb816518-fac2-48f2-8197-ca7a83c89ada/file |
| id               | eb816518-fac2-48f2-8197-ca7a83c89ada ...             |


Revision history for this message
Michal Arbet (michalarbet) wrote :

michalarbet@ut-9:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 'rbd://b%27868c2b5d-12f1-4f3f-aa2a-5701a3bb1041%27/images/eb816518-fac2-48f2-8197-ca7a83c89ada/snap'
>>> import urllib
>>> pieces = [urllib.parse.unquote(loc) for loc in a[len(prefix):].split('/')]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'prefix' is not defined
>>> prefix='rbd://'
>>> pieces = [urllib.parse.unquote(loc) for loc in a[len(prefix):].split('/')]
>>> print(pieces)
["b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'", 'images', 'eb816518-fac2-48f2-8197-ca7a83c89ada', 'snap']

Revision history for this message
Michal Arbet (michalarbet) wrote :

Tcpdump :

....Q.\%......".<.(.. openstack.cinder-scheduler........+.<.............application/json.utf-8..............{"oslo.version": "2.0", "oslo.message": "{\"_context_user\": null, \"_context_project_domain\": null, \"version\": \"3.5\", \"_context_show_deleted\": false, \"_context_project_name\": null, \"_context_domain_id\": null, \"args\": {\"service_name\": \"volume\", \"backend\": \"openstack-compute2@rbd\", \"timestamp\": \"2019-02-18T21:30:13.119600\", \"capabilities\": {\"allocated_capacity_gb\": 60, \"backend_state\": \"up\", \"thin_provisioning_support\": true, \"location_info\": \"ceph:/etc/ceph/ceph.conf:b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041':volumes1:volumes\", \"provisioned_capacity_gb\": 10, \"reserved_percentage\": 0, \"volume_backend_name\": \"RBD\", \"driver_version\": \"1.2.0\", \"replication_enabled\": false, \"storage_protocol\": \"ceph\", \"free_capacity_gb\": 51.0, \"max_over_subscription_ratio\": \"20.0\", \"multiattach\": false, \"vendor_name\": \"Open Source\", \"total_capacity_gb\": 55.54, \"filter_function\": null, \"goodness_function\": null}}, \"_context_read_deleted\": \"no\", \"_context_read_only\": false, \"_context_user_domain\": null, \"_context_roles\": [\"admin\"], \"_context_auth_token\": null, \"_context_global_request_id\": null, \"_context_is_admin_project\": true, \"_unique_id\": \"54caaebfbd98442bab2a04d59a189698\", \"_context_system_scope\": null, \"_context_project\": null, \"_context_service_catalog\": [], \"_context_timestamp\": \"2019-02-18T21:30:12.863207\", \"_context_request_id\": \"req-477b6603-75bb-4702-b1cc-1d1b4dbd0838\", \"_context_user_identity\": \"- - - - -\", \"_context_remote_address\": null, \"_context_domain\": null, \"_context_quota_class\": null, \"_context_project_id\": null, \"_context_resource_uuid\": null, \"_context_user_id\": null, \"method\": \"notify_service_capabilities\", \"_context_is_admin\": true, \"_context_tenant\": null}"}.

>>>>

\"location_info\": \"ceph:/etc/ceph/ceph.conf:b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041':volumes1:volumes\"

Revision history for this message
Michal Arbet (michalarbet) wrote :

The root cause is that librados' get_fsid() returns bytes under python3.

root@openstack-controller:~# ipython3
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.

In [1]: import rados

In [2]: from rados import Rados

In [3]: client = Rados(rados_id='admin', clustername='ceph', conffile='/etc/ceph/ceph.conf')

In [4]: client.connect()

In [5]: client.get_fsid()
Out[5]: b'868c2b5d-12f1-4f3f-aa2a-5701a3bb1041'
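
The fix that eventually landed normalises this value to a native string before the comparison; a minimal illustration of that kind of normalisation (not the verbatim cinder patch) would be:

    from oslo_utils import encodeutils

    def get_fsid_str(rados_client):
        # librados may hand back bytes under py3; safe_decode() returns a
        # native str in either case, so the fsid comparison works again
        return encodeutils.safe_decode(rados_client.get_fsid())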

Revision history for this message
Michal Arbet (michalarbet) wrote :

Created bug in librados :

https://tracker.ceph.com/issues/38381

Changed in cinder:
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/635195
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=ed84f34562da21a53b6d45611a5bcf9e3edd1e84
Submitter: Zuul
Branch: master

commit ed84f34562da21a53b6d45611a5bcf9e3edd1e84
Author: Michal Arbet <email address hidden>
Date: Wed Feb 6 16:11:28 2019 +0100

    Fix python3 compatibility of rbd get_fsid

    In python3 librados's get_fsid() function is represented as binary.
    Because of this, conditional where fsid is compared with fsid parsed
    from glance's direct url is not true, then cinder-glance
    acceleration is not working. This patch is fixing this.
    More informations are in closing bug.

    Closes-Bug: #1816468

    Change-Id: I69d0685ec845355329f771f27f8894d9e988ae35

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 14.0.0.0rc1

This issue was fixed in the openstack/cinder 14.0.0.0rc1 release candidate.

Revision history for this message
Edward Hope-Morley (hopem) wrote : Re: Acceleration cinder - glance with ceph not working

This also needs fixing in Nova (rbd imagebackend)

Changed in nova:
assignee: nobody → Edward Hope-Morley (hopem)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.opendev.org/663607

Changed in nova:
status: New → In Progress
Changed in cinder (Ubuntu Cosmic):
importance: Undecided → High
status: New → Triaged
Changed in cinder (Ubuntu Disco):
importance: Undecided → High
status: New → Triaged
Changed in cinder (Ubuntu Eoan):
importance: Undecided → High
status: New → Triaged
Changed in nova (Ubuntu Cosmic):
importance: Undecided → High
status: New → Triaged
Changed in nova (Ubuntu Disco):
importance: Undecided → High
status: New → Triaged
Changed in nova (Ubuntu Eoan):
importance: Undecided → High
status: New → Triaged
melanie witt (melwitt)
Changed in nova:
importance: Undecided → Medium
Revision history for this message
Edward Hope-Morley (hopem) wrote : Re: Acceleration cinder - glance with ceph not working

As an aside, I just tested the nova patch in a local deployment and it isn't actually sufficient to resolve the problem, because the value stored by glance is also invalid - https://pastebin.ubuntu.com/p/mj9gFpSBMv/

I'll have a look to see if I can get the glance value stripped of invalid characters when doing the comparison, but otherwise I'll have to patch glance as well.

Revision history for this message
melanie witt (melwitt) wrote :

I think either way, we would want a patch in glance as well, if it is using the fsid from ceph.

(later)

Looking through the code, I found that such a change has already been made:

https://github.com/openstack/glance_store/blob/6266ddf45f4d006cd7a8304224ec0ba0774489c1/glance_store/_drivers/rbd.py#L507

to fix this glance store bug:

https://bugs.launchpad.net/glance-store/+bug/1816721

This fix was released in glance_store 0.28.0.

Edward, is your local deployment using at least glance_store 0.28.0?

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Looks like the glance issue was fixed using a separate bug - https://bugs.launchpad.net/glance-store/+bug/1816721 so at least that's done (but needs backporting to Rocky)

Revision history for this message
Edward Hope-Morley (hopem) wrote :

@melwitt sorry, I didn't refresh the page before commenting so I missed that you added your comment. So yeah, it's just a matter of getting the various bits backported to R (and the nova part landed and backported to S and R).

Revision history for this message
Edward Hope-Morley (hopem) wrote :

glance stable/rocky backport submitted in bug 1816721

Changed in cinder (Ubuntu Eoan):
status: Triaged → Fix Released
no longer affects: cinder (Ubuntu Eoan)
Changed in cinder (Ubuntu Disco):
status: Triaged → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.opendev.org/663607
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e8b6b0bc78ec229803d1d27f8a4706e2c425bd77
Submitter: Zuul
Branch: master

commit e8b6b0bc78ec229803d1d27f8a4706e2c425bd77
Author: Edward Hope-Morley <email address hidden>
Date: Thu Jun 6 12:26:26 2019 +0100

    Fix python3 compatibility of rbd get_fsid

    In py3 librados's get_fsid() returns a binary string, which breaks
    the comparison against the same value held as a (unicode) string.
    This is currently breaking the logic that compares ceph cluster
    fsids when deciding whether the image used to boot an instance is
    cow-clonable.

    Change-Id: I79b40ca40400c67b0805926096317afd875ffabb
    Closes-Bug: #1816468

Changed in nova:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (stable/rocky)

Reviewed: https://review.opendev.org/664262
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=00cfd67d00885c32a9618734a102d7f5f9f26b39
Submitter: Zuul
Branch: stable/rocky

commit 00cfd67d00885c32a9618734a102d7f5f9f26b39
Author: Michal Arbet <email address hidden>
Date: Wed Feb 6 16:11:28 2019 +0100

    Fix python3 compatibility of rbd get_fsid

    In Python 3, librados's get_fsid() returns a binary string. Because
    of this, the conditional where the fsid is compared with the fsid
    parsed from glance's direct URL is never true, so cinder-glance
    acceleration does not work. This patch fixes that.
    More information is in the closing bug.

    Closes-Bug: #1816468

    Change-Id: I69d0685ec845355329f771f27f8894d9e988ae35
    (cherry picked from commit ed84f34562da21a53b6d45611a5bcf9e3edd1e84)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/664513

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/stein)

Reviewed: https://review.opendev.org/664512
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b4b47a59c7ff1d60e69dbdea07de9163948cb680
Submitter: Zuul
Branch: stable/stein

commit b4b47a59c7ff1d60e69dbdea07de9163948cb680
Author: Edward Hope-Morley <email address hidden>
Date: Thu Jun 6 12:26:26 2019 +0100

    Fix python3 compatibility of rbd get_fsid

    In py3 librados's get_fsid() returns a binary string, which breaks
    the comparison against the same value held as a (unicode) string.
    This is currently breaking the logic that compares ceph cluster
    fsids when deciding whether the image used to boot an instance is
    cow-clonable.

    Change-Id: I79b40ca40400c67b0805926096317afd875ffabb
    Closes-Bug: #1816468
    (cherry picked from commit e8b6b0bc78ec229803d1d27f8a4706e2c425bd77)

Revision history for this message
Corey Bryant (corey.bryant) wrote : Re: Acceleration cinder - glance with ceph not working

Cosmic is EOL. We can target Rocky directly.

Changed in nova (Ubuntu Cosmic):
status: Triaged → Won't Fix
Changed in cinder (Ubuntu Cosmic):
status: Triaged → Won't Fix
Revision history for this message
Corey Bryant (corey.bryant) wrote :

In Ubuntu this is already fixed in Stein and Train for Cinder, and not fixed in any releases for Nova. Needs fixing in Rocky for Cinder and Rocky+ for Nova.

Revision history for this message
Michal Arbet (michalarbet) wrote :

Hi, this should now be fixed in ceph itself as well - I reported a bug to the ceph developers and it has now been fixed, I think:

https://tracker.ceph.com/issues/38381#change-139292
https://github.com/ceph/ceph/pull/26514

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Thanks Michal

Changed in nova (Ubuntu Eoan):
status: Triaged → Fix Committed
Revision history for this message
Corey Bryant (corey.bryant) wrote :

New nova package versions with this fix have been uploaded to Ubuntu Eoan, Disco unapproved queue [1], and rocky-staging.

[1] https://launchpad.net/ubuntu/disco/+queue?queue_state=1&queue_text=nova

Revision history for this message
Corey Bryant (corey.bryant) wrote : Please test proposed package

Hello Michal, or anyone else affected,

Accepted cinder into rocky-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:rocky-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-rocky-needed to verification-rocky-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-rocky-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-rocky-needed
Revision history for this message
Corey Bryant (corey.bryant) wrote :

Hello Michal, or anyone else affected,

Accepted nova into rocky-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:rocky-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-rocky-needed to verification-rocky-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-rocky-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Revision history for this message
Łukasz Zemczak (sil2100) wrote : Re: Acceleration cinder - glance with ceph not working

What is the test case and regression potential of the change? I will not be able to accept it as an SRU without the base SRU information being provided.

summary: - Acceleration cinder - glance with ceph not working
+ [SRU] Acceleration cinder - glance with ceph not working
Revision history for this message
Edward Hope-Morley (hopem) wrote :

@sil2100 Hi, I've added the SRU template and test case - sorry, I forgot to add them before.

description: updated
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Please test proposed package

Hello Michal, or anyone else affected,

Accepted nova into disco-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/nova/2:19.0.1-0ubuntu2 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-disco to verification-done-disco. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-disco. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in nova (Ubuntu Disco):
status: Triaged → Fix Committed
tags: added: verification-needed verification-needed-disco
Revision history for this message
Corey Bryant (corey.bryant) wrote :

Hello Michal, or anyone else affected,

Accepted nova into stein-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:stein-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-stein-needed to verification-stein-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-stein-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-stein-needed
Revision history for this message
Vlad Sorokin (vvsorokin) wrote :

Hi Corey,

I updated cinder-volume (and all dependent packages) to 2:13.0.5-0ubuntu1~cloud2 from the cloud-archive:rocky-proposed PPA. However, creating an image from a volume still uses the download/convert/upload mechanism in /var/lib/cinder/conversion on the Cinder controller. I was using the following command:
openstack image create --volume

FYI, upgrading glance-store-common to 0.26.1-0ubuntu2.2~cloud0 helped with creating an image from a volume - it uses a Ceph COW clone now.

tags: added: verification-rocky-failed
removed: verification-rocky-needed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 13.0.6

This issue was fixed in the openstack/cinder 13.0.6 release.

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Hi @vvsorokin, can I get you to check something? If you upgraded to this new package but had previously stored an image using the old broken package, you will need to manually modify the DB entry for that image in the glance database to get it working again (after upgrading). There is a bit more info in https://bugs.launchpad.net/cloud-archive/+bug/1816721. You also need to upgrade glance. So, to fix this problem you need to:

 * upgrade cinder
 * upgrade glance
 * check glance db for incorrectly formatted image local locations in the image_locations table (see [Test Case] in 1816721)

Alternatively, instead of modifying the DB, you can just delete and re-upload your glance image, which should give you the correct value in the database. (A hypothetical sketch of checking the image_locations table follows below.)
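
One hypothetical way to list suspect rows, assuming a MySQL-backed glance database and that broken entries embed a Python bytes literal (b'...') in the stored rbd URL as described in bug 1816721 (host, credentials and the search pattern are placeholders/assumptions, not taken from this bug):

    # Hypothetical check script - adjust connection details to your deployment.
    import pymysql

    conn = pymysql.connect(host='localhost', user='glance',
                           password='<glance-db-password>', database='glance')
    try:
        with conn.cursor() as cur:
            # Look for live locations whose value still contains a bytes
            # literal left over from the broken py3 code path.
            cur.execute(
                "SELECT image_id, value FROM image_locations "
                "WHERE deleted = 0 AND value LIKE %s", ("%b'%",))
            for image_id, value in cur.fetchall():
                print(image_id, value)
    finally:
        conn.close()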

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/rocky)

Reviewed: https://review.opendev.org/664513
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=25fd08755e363f442f4ad390c1cb7de301fc7dfb
Submitter: Zuul
Branch: stable/rocky

commit 25fd08755e363f442f4ad390c1cb7de301fc7dfb
Author: Edward Hope-Morley <email address hidden>
Date: Thu Jun 6 12:26:26 2019 +0100

    Fix python3 compatibility of rbd get_fsid

    In py3 librados's get_fsid() returns a binary string, which breaks
    the comparison against the same value held as a (unicode) string.
    This is currently breaking the logic that compares ceph cluster
    fsids when deciding whether the image used to boot an instance is
    cow-clonable.

    Change-Id: I79b40ca40400c67b0805926096317afd875ffabb
    Closes-Bug: #1816468
    (cherry picked from commit e8b6b0bc78ec229803d1d27f8a4706e2c425bd77)
    (cherry picked from commit b4b47a59c7ff1d60e69dbdea07de9163948cb680)

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Disco verified (nova) using [Test Case]

Test Output:

root@juju-935137-lp1816468-sru-disco-0:~# rbd -p nova info f507dc2b-fc6b-421c-be22-2fde84826099_disk| grep parent
        parent: glance/f0d95291-aaf7-4e8f-829d-515ca55ba874@snap

tags: added: verification-done-disco
removed: verification-needed-disco
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Bionic + Stein verified (nova) using [Test Case]

Test Output:

root@juju-7f1874-lp1816468-sru-stein-0:~# rbd -p nova info 2923df76-c175-4862-a61a-71207390b4cb_disk| grep parent
        parent: glance/2ea4abff-a978-4b99-b717-5a3c8932b6f6@snap

tags: added: verification-stein-done
removed: verification-stein-needed
tags: added: verification-rocky-needed
removed: verification-rocky-failed
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Bionic + Rocky verified (nova) using [Test Case]

Test Output:

root@juju-7ad500-lp1816468-sru-rocky-11:~# dpkg -l| grep nova-compute
ii nova-compute 2:18.2.1-0ubuntu1~cloud3 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:18.2.1-0ubuntu1~cloud3 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:18.2.1-0ubuntu1~cloud3 all OpenStack Compute - compute node libvirt support

root@juju-7ad500-lp1816468-sru-rocky-0:~# rbd -p nova info 30503caf-e7a0-44d0-886d-36f92690ab88_disk| grep parent
        parent: glance/c981d74b-ef06-40e6-9e4c-199b99a49e82@snap

tags: added: verification-done verification-rocky-done
removed: verification-needed verification-rocky-needed
Revision history for this message
Corey Bryant (corey.bryant) wrote :

Additional testing notes. Regression testing passed on disco-proposed, stein-proposed:

disco-proposed:
======
Totals
======
Ran: 92 tests in 1108.7641 sec.
 - Passed: 84
 - Skipped: 8
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 536.1289 sec.

stein-proposed:
======
Totals
======
Ran: 92 tests in 1490.1213 sec.
 - Passed: 84
 - Skipped: 8
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 730.5460 sec.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package nova - 2:19.0.1-0ubuntu2

---------------
nova (2:19.0.1-0ubuntu2) disco; urgency=medium

  * d/p/fix-python3-compatibility-of-rbd-get_fsid.patch: Cherry-picked from
    upstream stable/stein branch to ensure librados get_fsid return value
    is decoded (LP: #1816468).

 -- Corey Bryant <email address hidden> Mon, 08 Jul 2019 16:08:15 -0400

Changed in nova (Ubuntu Disco):
status: Fix Committed → Fix Released
Revision history for this message
Robie Basak (racb) wrote : Update Released

The verification of the Stable Release Update for nova has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

The verification of the Stable Release Update for nova has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This bug was fixed in the package nova - 2:19.0.1-0ubuntu2~cloud0
---------------

 nova (2:19.0.1-0ubuntu2~cloud0) bionic-stein; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:19.0.1-0ubuntu2) disco; urgency=medium
 .
   * d/p/fix-python3-compatibility-of-rbd-get_fsid.patch: Cherry-picked from
     upstream stable/stein branch to ensure librados get_fsid return value
     is decoded (LP: #1816468).

Revision history for this message
Corey Bryant (corey.bryant) wrote :

2:18.2.1-0ubuntu1~cloud3 is in rocky-updates [1] and this was fixed in 2:18.2.0-0ubuntu2~cloud1
[1] http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/bionic-updates/rocky/main/binary-amd64/Packages

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 19.0.2

This issue was fixed in the openstack/nova 19.0.2 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 18.2.2

This issue was fixed in the openstack/nova 18.2.2 release.

Changed in cloud-archive:
status: Fix Released → Fix Committed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package nova - 2:20.0.0~b1~git2019061016.1316c1c285-0ubuntu4

---------------
nova (2:20.0.0~b1~git2019061016.1316c1c285-0ubuntu4) eoan; urgency=medium

  [ Sahid Orentino Ferdjaoui ]
  * SECURITY UPDATE: ensure underlying environment details not leaked in
    server fault
    - d/p/CVE-2019-14433.patch: replace non-admin server fault message
      with exception type class name in nova/compute/utils.py.
    - CVE-2019-14433

 -- Corey Bryant <email address hidden> Tue, 06 Aug 2019 12:08:30 -0400

Changed in nova (Ubuntu Eoan):
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 20.0.0.0rc1

This issue was fixed in the openstack/nova 20.0.0.0rc1 release candidate.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This is fix-released in the Train cloud archive (nova since 20.0.0~rc1), and has been fix-released in cinder since 14.0.0~rc1.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This is fix-released in the Stein cloud archive for cinder only at this point (available since 14.0.0~rc1). For nova, version 2:19.0.1-0ubuntu2 will include this fix.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Nova in stein-proposed promotion is blocked by the following bugs:
https://bugs.launchpad.net/bugs/1833406 (unverified)
https://bugs.launchpad.net/bugs/1849192 (verified)

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Bug 1833406 is now verified.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

nova 2:19.0.1-0ubuntu2 has been released for a while now.

Changed in cloud-archive:
status: Fix Committed → Fix Released