Creating a Cinder volume from an image ID fails to copy the image to the volume

Bug #1183283 reported by Kashyap Chamarthy on 2013-05-23
This bug affects 2 people
Affects           Importance  Assigned to
Cinder            High        John Griffith
Cinder (Grizzly)  High        Adam Gandelman

Bug Description

Description
~~~~~~~~~~
Set up a fresh devstack install on Fedora 19; an attempt to create a Cinder volume fails with "Logical volume already exists". This behavior is reproducible. (Also tried removing the LV manually; more info on this below.)

Some details
~~~~~~~~~~~

From /opt/stack/nova

  $ git log | head -1
commit 7303b93e2361ec9f96db02ab7fcbb70c5b2765cd

From ~/src/devstack/

  $ git log | head -1
683ef75510389d124421f0019df11f73b6959cd9

Setup
~~~~~

(0) Configure devstack, source the keystone credentials.
===
$ ./stack.sh
$ . openrc
===

(1) List existing images
===
$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| dfe3df69-fe3b-4d0f-8cfe-f4c28a0b3d77 | cirros-0.3.0-x86_64-uec | ami | ami | 25165824 | active |
| 7cd04f66-6280-4b01-affa-15a4e2344cfe | cirros-0.3.0-x86_64-uec-kernel | aki | aki | 4731440 | active |
| 2ba1b137-6a37-44b1-bcab-b14d7e5ebf9a | cirros-0.3.0-x86_64-uec-ramdisk | ari | ari | 2254249 | active |
| 52ff39a7-470c-4cbd-b2c2-83420af2a016 | f17-x86_64-openstack-sda | qcow2 | bare | 251985920 | active |
| 2f9e7f1f-778f-45e6-87d9-bc6146f18b3d | Fedora18-Cloud-x86_64-latest | qcow2 | bare | 227409920 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
===

(2) Create a volume
===
$ cinder create --display-name image_vol-test --image_id 52ff39a7-470c-4cbd-b2c2-83420af2a016 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2013-05-23T08:29:54.897325 |
| display_description | None |
| display_name | image_vol-test |
| id | 0894c5fb-8565-4ce1-8c84-3bedbb80f6ad |
| image_id | 52ff39a7-470c-4cbd-b2c2-83420af2a016 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
===

(3) List the just-created volume
===
$ cinder list
+--------------------------------------+--------+----------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+----------------+------+-------------+----------+-------------+
| 0894c5fb-8565-4ce1-8c84-3bedbb80f6ad | error | image_vol-test | 1 | None | false | |
+--------------------------------------+--------+----------------+------+-------------+----------+-------------+
===

(4) From the screen logs:
===
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Exit code: 5
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Stdout: ''
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Stderr: ' Logical volume "volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29" already exists in volume group "stack-volumes"\n'
2013-05-23 04:55:40.411 TRACE cinder.volume.driver
===

Further Investigation
~~~~~~~~~~~~~~~~~~~~~
Try to remove the logical volume manually, and retry creating the volume:
=====
$ ls /dev/stack-volumes/
volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29
=====
$ ls -al /dev/stack-volumes/volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29
lrwxrwxrwx. 1 root root 7 May 23 04:55 /dev/stack-volumes/volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29 -> ../dm-2
=====
$ sudo -i
=====
$ rm /dev/dm-2
rm: remove block special file ‘/dev/dm-2’? y
=====

Create the volume again:
=====
$ cinder delete b1f03e26-2a80-4a9d-9c2b-428d62e3fe29
=====
$ cinder create --display-name image_vol-test3 --image_id 52ff39a7-470c-4cbd-b2c2-83420af2a016 1
=====
$ cinder list
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| c25c7c90-9bcf-47de-b525-6254c82f81fe | error | image_vol-test3 | 1 | None | false | |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
===

Kashyap Chamarthy (kashyapc) wrote :

More contextual information from the SCREEN log:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[...]
2013-05-23 04:55:39.218 TRACE cinder.volume.driver Traceback (most recent call last):
2013-05-23 04:55:39.218 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/volume/driver.py", line 88, in _try_execute
2013-05-23 04:55:39.218 TRACE cinder.volume.driver self._execute(*command, **kwargs)
2013-05-23 04:55:39.218 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/utils.py", line 193, in execute
2013-05-23 04:55:39.218 TRACE cinder.volume.driver cmd=' '.join(cmd))
2013-05-23 04:55:39.218 TRACE cinder.volume.driver ProcessExecutionError: Unexpected error while running command.
2013-05-23 04:55:39.218 TRACE cinder.volume.driver Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29 stack-volumes
2013-05-23 04:55:39.218 TRACE cinder.volume.driver Exit code: 5
2013-05-23 04:55:39.218 TRACE cinder.volume.driver Stdout: ''
2013-05-23 04:55:39.218 TRACE cinder.volume.driver Stderr: ' Logical volume "volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29" already exists in volume group "stack-volumes"\n'
2013-05-23 04:55:39.218 TRACE cinder.volume.driver
2013-05-23 04:55:40.220 DEBUG cinder.utils [req-6367045a-60d7-43de-b011-442de86000ae f55e931102fa473083b5ac048b805a97 988c13aa9fb64d45a8999ccd4425be1f] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29 stack-volumes from (pid=12748) execute /opt/stack/cinder/cinder/utils.py:169
2013-05-23 04:55:40.410 DEBUG cinder.utils [req-6367045a-60d7-43de-b011-442de86000ae f55e931102fa473083b5ac048b805a97 988c13aa9fb64d45a8999ccd4425be1f] Result was 5 from (pid=12748) execute /opt/stack/cinder/cinder/utils.py:186
2013-05-23 04:55:40.411 ERROR cinder.volume.driver [req-6367045a-60d7-43de-b011-442de86000ae f55e931102fa473083b5ac048b805a97 988c13aa9fb64d45a8999ccd4425be1f] Recovering from a failed execute. Try number 2
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Traceback (most recent call last):
2013-05-23 04:55:40.411 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/volume/driver.py", line 88, in _try_execute
2013-05-23 04:55:40.411 TRACE cinder.volume.driver self._execute(*command, **kwargs)
2013-05-23 04:55:40.411 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/utils.py", line 193, in execute
2013-05-23 04:55:40.411 TRACE cinder.volume.driver cmd=' '.join(cmd))
2013-05-23 04:55:40.411 TRACE cinder.volume.driver ProcessExecutionError: Unexpected error while running command.
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29 stack-volumes
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Exit code: 5
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Stdout: ''
2013-05-23 04:55:40.411 TRACE cinder.volume.driver Stderr: ' Logical volume "volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29" already exists in volume group "stack-volumes"\n'
2013-05-23 04:55:40.411 TRACE cinder.volume.driver
[...]

Kashyap Chamarthy (kashyapc) wrote :

Just to note:

After removing the LV

  $ rm /dev/dm-2
rm: remove block special file ‘/dev/dm-2’? y

I also, explicitly removed the symlink to it:

  $ rm /dev/stack-volumes/volume-b1f03e26-2a80-4a9d-9c2b-428d62e3fe29

And repeated the Cinder volume creation, to no avail.

John Griffith (john-griffith) wrote :

I'd recommend doing a fresh devstack install. Also, what you want to inspect here is the output of "sudo lvs", and if you want to do manual cleanup, use "sudo lvremove".

I'm not able to reproduce this at all, and the only thing I can see happening here is that the lvcreate call is somehow being made twice...

Kashyap Chamarthy (kashyapc) wrote :

Question: in this context, what difference does it make whether you remove it via lvremove or by explicitly deleting the block device?

Anyhow, I tried w/ lvremove, still no dice.

=====
$ sudo lvs
  LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
  root fedora -wi-ao--- 15.57g
  swap fedora -wi-ao--- 3.94g
  volume-c875158c-e130-4c65-b9f7-e3975b23a8b3 stack-volumes -wi-a---- 1.00g
=====
$ sudo lvremove stack-volumes
Do you really want to remove active logical volume volume-c875158c-e130-4c65-b9f7-e3975b23a8b3? [y/n]: y
  Logical volume "volume-c875158c-e130-4c65-b9f7-e3975b23a8b3" successfully removed

$ cinder create --display-name image_vol-test5 --image_id 52ff39a7-470c-4cbd-b2c2-83420af2a016 1
=====
$ cinder list
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
| c875158c-e130-4c65-b9f7-e3975b23a8b3 | error | image_vol-test4 | 1 | None | false | |
| d25b8bf8-8441-4355-8e74-3a4caa6eedf6 | error | image_vol-test5 | 1 | None | false | |
+--------------------------------------+--------+-----------------+------+-------------+----------+-------------+
=====

Let me try a fresh devstack install (git pull devstack and all repos in /opt/stack, then ./unstack.sh && ./stack.sh).

And yes, I noticed too that 'lvcreate' is being called twice. Will investigate.

Kashyap Chamarthy (kashyapc) wrote :

OK, I can consistently reproduce this on F19 with the latest devstack, at least.

This is how I retried. Am I still doing something incorrect here?

# Update all git repositories
$ cd /opt/stack/ ; for i in cinder glance nova horizon keystone noVNC \
    pbr python-cinderclient python-glanceclient python-keystoneclient python-novaclient \
    python-openstackclient python-quantumclient python-swiftclient tempest; \
    do cd $i; git pull; cd ..; done

# Add a patch that lets Fedora 19+ systems be successfully configured by devstack
$ git fetch https://review.openstack.org/openstack-dev/devstack refs/changes/84/29784/1 && git checkout FETCH_HEAD

# Cleanup; fresh setup
$ ./unstack.sh && ./stack.sh
[...]
2013-05-24 04:04:16 + set +o xtrace
2013-05-24 04:04:16 stack.sh completed in 148 seconds.

# List the images
$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| a5dc830a-2a9f-410c-a435-b49e7fb4abc6 | cirros-0.3.0-x86_64-uec | ami | ami | 25165824 | active |
| af6cc743-d24e-4e16-85c3-bf02221d14bd | cirros-0.3.0-x86_64-uec-kernel | aki | aki | 4731440 | active |
| 156f93d0-1808-4f72-9401-0cd43679f467 | cirros-0.3.0-x86_64-uec-ramdisk | ari | ari | 2254249 | active |
| 9d517cee-92d0-4975-8346-2bd2fd4ecf47 | f17-x86_64-openstack-sda | qcow2 | bare | 251985920 | active |
| a20e86fd-08ce-4245-b01b-0be23540de3f | Fedora18-Cloud-x86_64-latest | qcow2 | bare | 227409920 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+

# Create a volume
$ cinder create --display-name image_vol-test-n1 --image_id 9d517cee-92d0-4975-8346-2bd2fd4ecf47 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2013-05-24T08:05:36.135398 |
| display_description | None |
| display_name | image_vol-test-n1 |
| id | 94ac47f5-71d8-4adf-aa15-cbd758ed2b7f |
| image_id | 9d517cee-92d0-4975-8346-2bd2fd4ecf47 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+----...


John Griffith (john-griffith) wrote :

Hi Kashyap, thanks for all the extra work and effort on this. It appears there's an unexpected response from the create call on your setup (possibly Fedora 19 related). What happens is that the driver interprets the call to lvcreate as NOT succeeding and issues the retry. The problem is that the lvcreate command actually was successful, so the subsequent retry gets a *real* error of "logical volume already exists".

I'm wondering if you could look a bit further up in the logs and find the logged error for the first call to lvcreate? Our answer is likely there. It may be that Fedora 19 is reporting something unique in its response that we're not handling correctly, or there's some other error that we're not accounting for and don't realize we shouldn't retry on.
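The failure mode under discussion can be sketched in miniature (all names hypothetical; the real retry logic lives in `_try_execute` in cinder/volume/driver.py, per the tracebacks above): if the create sequence is retried after `lvcreate` has already succeeded, the retry surfaces a misleading "already exists" error in place of the real one.

```python
# Miniature sketch (hypothetical names) of the failure mode: a failure
# *after* a successful lvcreate, combined with a retry of the whole
# sequence, masks the real error behind "already exists".

class ProcessExecutionError(Exception):
    pass

existing_lvs = set()

def lvcreate(name):
    if name in existing_lvs:
        raise ProcessExecutionError(
            'Logical volume "%s" already exists' % name)
    existing_lvs.add(name)

def copy_image_to_volume(name):
    # Stand-in for qemu-img convert failing with ENOSPC.
    raise ProcessExecutionError("error while converting raw: "
                                "No space left on device")

def create_volume(name, retries=2):
    last_error = None
    for _attempt in range(retries):
        try:
            lvcreate(name)              # succeeds only on the first attempt
            copy_image_to_volume(name)  # fails, triggering the retry
            return "available"
        except ProcessExecutionError as exc:
            last_error = exc            # the retry overwrites the real error
    return "error: %s" % last_error

print(create_volume("volume-demo"))
# prints: error: Logical volume "volume-demo" already exists
```

The reported error is the "already exists" from the second attempt; the ENOSPC from the first attempt's image copy is lost, which is exactly why the log reads the way it does.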

Kashyap Chamarthy (kashyapc) wrote :

Hi John, thanks for your response.

I did note the first call to "lvcreate" in comment #5. Please grep for "Try number 1" -- https://bugs.launchpad.net/cinder/+bug/1183283/comments/5

I presume that is what you're looking for?

Kashyap Chamarthy (kashyapc) wrote :

On a related note, I found that tgtd wasn't running. I started the tgtd service, cleaned up all volumes, and created a new volume. Same result:

  $ sudo lvs

  $ sudo lvremove stack-volumes

  $ cinder create --display-name image_vol-test-may26 \
   --image_id 9d517cee-92d0-4975-8346-2bd2fd4ecf47 1

  $ cinder list

  $ less -R /home/kashyap/src/devstack-2-logs/data/logs/screen-c-vol.2013-05-24-040145.log
.
.
.
 from (pid=29275) log_http_response /opt/stack/python-glanceclient/glanceclient/common/http.py:143
2013-05-24 04:07:14.249 DEBUG cinder.utils [req-ddf67a1e-8f95-43f2-aefa-223024fed31d 2dbbf38c7ac344c99c4b2a5813733454 273e1a6b14db4490a9a87e0e6584fd13] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-c2094b47-f60f-481b-b49d-46e7ebe7c15c stack-volumes from (pid=29275) execute /opt/stack/cinder/cinder/utils.py:169
2013-05-24 04:07:14.443 DEBUG cinder.utils [req-ddf67a1e-8f95-43f2-aefa-223024fed31d 2dbbf38c7ac344c99c4b2a5813733454 273e1a6b14db4490a9a87e0e6584fd13] Result was 5 from (pid=29275) execute /opt/stack/cinder/cinder/utils.py:186
2013-05-24 04:07:14.444 ERROR cinder.volume.driver [req-ddf67a1e-8f95-43f2-aefa-223024fed31d 2dbbf38c7ac344c99c4b2a5813733454 273e1a6b14db4490a9a87e0e6584fd13] Recovering from a failed execute. Try number 1
2013-05-24 04:07:14.444 TRACE cinder.volume.driver Traceback (most recent call last):
2013-05-24 04:07:14.444 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/volume/driver.py", line 88, in _try_execute
2013-05-24 04:07:14.444 TRACE cinder.volume.driver self._execute(*command, **kwargs)
2013-05-24 04:07:14.444 TRACE cinder.volume.driver File "/opt/stack/cinder/cinder/utils.py", line 193, in execute
2013-05-24 04:07:14.444 TRACE cinder.volume.driver cmd=' '.join(cmd))
2013-05-24 04:07:14.444 TRACE cinder.volume.driver ProcessExecutionError: Unexpected error while running command.
2013-05-24 04:07:14.444 TRACE cinder.volume.driver Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-c2094b47-f60f-481b-b49d-46e7ebe7c15c stack-volumes
2013-05-24 04:07:14.444 TRACE cinder.volume.driver Exit code: 5
2013-05-24 04:07:14.444 TRACE cinder.volume.driver Stdout: ''
2013-05-24 04:07:14.444 TRACE cinder.volume.driver Stderr: ' Logical volume "volume-c2094b47-f60f-481b-b49d-46e7ebe7c15c" already exists in volume group "stack-volumes"\n'
2013-05-24 04:07:14.444 TRACE cinder.volume.driver
==============================================

John Griffith (john-griffith) wrote :

Try 1 is actually the subsequent call that failed; the try count starts at 0. There should be some error message in the logs prior to where this segment of the log picks up.

Kashyap Chamarthy (kashyapc) wrote :

Oh, I hadn't paid attention to these messages, as they appeared in green :( when I did "less -R /path/to/screen-log":

 "error while converting raw: No space left on device\\\\n\'\\n"]},"

=====
.
.
.
4c99c4b2a5813733454", "project_id": "273e1a6b14db4490a9a87e0e6584fd13", "id": "c2094b47-f60f-481b-b49d-46e7ebe7c15c", "size": 1}}, "volume_id": "c2094b47-f60f-481b-b49d-46e7ebe7c15c", "filter_properties": {"config_options": {}, "user_id": "2dbbf38c7ac344c99c4b2a5813733454", "availability_zone": "nova", "volume_type": {}, "request_spec": {"volume_id": "c2094b47-f60f-481b-b49d-46e7ebe7c15c", "volume_properties": {"status": "creating", "volume_type_id": null, "display_name": "image_vol-test-n2", "availability_zone": "nova", "attach_status": "detached", "source_volid": null, "metadata": {}, "volume_metadata": [], "display_description": null, "snapshot_id": null, "user_id": "2dbbf38c7ac344c99c4b2a5813733454", "project_id": "273e1a6b14db4490a9a87e0e6584fd13", "id": "c2094b47-f60f-481b-b49d-46e7ebe7c15c", "size": 1}, "volume_type": {}, "image_id": "9d517cee-92d0-4975-8346-2bd2fd4ecf47", "source_volid": null, "snapshot_id": null, "resource_properties": {"status": "creating", "volume_type_id": null, "display_name": "image_vol-test-n2", "availability_zone": "nova", "size": 1, "attach_status": "detached", "source_volid": null, "volume_metadata": [], "display_description": null, "snapshot_id": null, "user_id": "2dbbf38c7ac344c99c4b2a5813733454", "project_id": "273e1a6b14db4490a9a87e0e6584fd13", "id": "c2094b47-f60f-481b-b49d-46e7ebe7c15c", "metadata": {}}}, "retry": {"num_attempts": 1, "hosts": ["devstack1labengpnqredhatcom"], "exc": ["Traceback (most recent call last):\\n", " File \\"/opt/stack/cinder/cinder/volume/manager.py\\", line 254, in create_volume\\n image_location)\\n", " File \\"/opt/stack/cinder/cinder/volume/manager.py\\", line 193, in _create_volume\\n image_id)\\n", " File \\"/opt/stack/cinder/cinder/volume/manager.py\\", line 607, in _copy_image_to_volume\\n image_id)\\n", " File \\"/opt/stack/cinder/cinder/volume/drivers/lvm.py\\", line 257, in copy_image_to_volume\\n self.local_path(volume))\\n", " File \\"/opt/stack/cinder/cinder/image/image_utils.py\\", 
line 242, in fetch_to_raw\\n convert_image(tmp, dest, \'raw\')\\n", " File \\"/opt/stack/cinder/cinder/image/image_utils.py\\", line 191, in convert_image\\n utils.execute(*cmd, run_as_root=True)\\n", " File \\"/opt/stack/cinder/cinder/utils.py\\", line 193, in execute\\n cmd=\' \'.join(cmd))\\n", "ProcessExecutionError: Unexpected error while running command.\\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw /tmp/tmppLr7l6 /dev/mapper/stack--volumes-volume--c2094b47--f60f--481b--b49d--46e7ebe7c15c\\nExit code: 1\\nStdout: \'\'\\nStderr: \'qemu-img: /dev/mapper/stack--volumes-volume--c2094b47--f60f--481b--b49d--46e7ebe7c15c: error while converting raw: No space left on device\\\\n\'\\n"]}, "metadata": {}, "resource_type": {}, "size": 1}, "topic": "cinder-volume", "image_id": "9d517cee-92d0-4975-8346-2bd2fd4ecf47", "snapshot_id": null}, "_unique_id": "dd243d0234c341e6885179ff0aacd8e5", "_context_timestamp": "2013-05-24T08:...
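The Stderr buried in this JSON is the real failure: `qemu-img convert` hit ENOSPC writing the image's raw contents into the 1 GB LV. The comparison that matters is the image's *virtual* size against the volume size, not the qcow2 file size. A minimal sketch, using sample data in the shape of `qemu-img info --output=json` output (the virtual-size value here is hypothetical; actual-size matches the file size from `glance image-list` above):

```python
import json

# Sample data shaped like `qemu-img info --output=json <image>` output.
# virtual-size is hypothetical for illustration.
sample_info = json.loads("""
{
    "virtual-size": 2147483648,
    "actual-size": 251985920,
    "format": "qcow2"
}
""")

volume_bytes = 1 * 1024 ** 3  # the 1 GB volume requested in the report

# qemu-img convert writes out the full raw (virtual) size, so this is
# the check that matters, not the size of the compressed qcow2 file.
fits = sample_info["virtual-size"] <= volume_bytes
print("image fits in volume:", fits)
# prints: image fits in volume: False
```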


Kashyap Chamarthy (kashyapc) wrote :

OK, we seem to have brought this to a conclusion: Cinder is failing to copy the image to the volume.

Some contextual discussion from #openstack-qa:
========
...
<jgriffith> kashyap: interesting
<jgriffith> kashyap: I didn't see your command entry asking it do do a copy of the image
<kashyap> Yeah
<jgriffith> kashyap: however that's what it appears to be doing ?
<jgriffith> kashyap: cinder allows you to write an image to a volume, but it's only when you specify an image-id on the create command
<jgriffith> kashyap: the failure is not the volume creation, it's the copy of the image to the volume
<kashyap> cinder create --display-name image_vol-test --image_id 52ff39a7-470c-4cbd-b2c2-83420af2a016 1
<kashyap> jgriffith, Yes, I see it.
<jgriffith> kashyap: so the real bug is our failure to handle that propertly
<jgriffith> properly
<jgriffith> kashyap: if you want to modify the bug you entered I'll try and get it fixed in the next day or so
<jgriffith> kashyap: yes, the summary and perhaps the title could be updated to point out exactly what the issue is, and you can assign it to me :)
========

========
$ sudo vgs
  VG #PV #LV #SN Attr VSize VFree
  fedora 1 2 0 wz--n- 19.51g 0
  stack-volumes 1 1 0 wz--n- 5.01g 4.01g
========
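Note that the VG itself is not full: 4.01 GiB of the 5.01 GiB backing store is free. The ENOSPC comes from the 1 GiB LV being smaller than the image's raw virtual size, which the convert writes out in full. A quick arithmetic sanity check using the sizes shown in the outputs above:

```python
GiB = 1024 ** 3

vg_free    = 4.01 * GiB   # VFree of stack-volumes, from `sudo vgs` above
lv_size    = 1 * GiB      # the 1 GB volume requested
qcow2_size = 251985920    # f17-x86_64-openstack-sda file size (glance image-list)

# The qcow2 *file* fits in the LV, and the VG has plenty of free space;
# what overflows is the image's raw virtual size, which exceeds the
# 1 GiB device during qemu-img convert (hence ENOSPC).
assert qcow2_size < lv_size < vg_free
print("qcow2 file: %.2f GiB, LV: %d GiB, VG free: %.2f GiB"
      % (qcow2_size / GiB, lv_size // GiB, vg_free / GiB))
```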

summary: - On a fresh devstack install, creating a new cinder volume fails with
- "Logical volume" exists.
+ Creating a Cinder volume from an image ID fails to copy the image to the
+ volume
Changed in cinder:
status: New → Confirmed
Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
importance: Undecided → High
milestone: none → havana-1
tags: added: grizzly-backport-potential

Fix proposed to branch: master
Review: https://review.openstack.org/30645

Changed in cinder:
status: Confirmed → In Progress
Changed in cinder:
assignee: John Griffith (john-griffith) → Mike Perez (thingee)
Mike Perez (thingee) on 2013-05-29
Changed in cinder:
status: In Progress → Won't Fix
status: Won't Fix → Fix Committed

Reviewed: https://review.openstack.org/30645
Committed: http://github.com/openstack/cinder/commit/b2371aeff9eccbd28952dd0f568da26722dc58f7
Submitter: Jenkins
Branch: master

commit b2371aeff9eccbd28952dd0f568da26722dc58f7
Author: John Griffith <email address hidden>
Date: Mon May 27 20:04:56 2013 +0000

    Catch and report errors from copy image to volume.

    The copy image-to-volume errors weren't being handled
    properly and the result was the lvcreate being retried
    even though the lvcreate itself succeeded.

    The result of this was misleading errors stating that
    the volume couldn't be created because it already existed
    (which it did, becuase the create itself was succesful).

    Fixes bug: 1183283

    Change-Id: I23f05fe64475c3efe285e05a258c4625b801375c
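The behavior this commit describes can be sketched as follows (hypothetical names, not cinder's actual code): the image-copy step is wrapped so its error is reported as a copy failure, instead of bubbling up in a way that causes the volume creation, and its lvcreate, to be retried.

```python
# Sketch of the fixed pattern: report copy-to-volume errors directly,
# and never re-run the (already successful) LV creation.

class ImageCopyFailure(Exception):
    pass

def create_volume_fixed(lvcreate, copy_image_to_volume):
    lvcreate()  # runs exactly once; a later copy failure won't re-run it
    try:
        copy_image_to_volume()
    except Exception as exc:
        # Surface the real error; the LV itself was created fine.
        raise ImageCopyFailure("failed to copy image to volume: %s" % exc)
    return "available"
```

With this shape, the ENOSPC from the convert step reaches the user as a copy failure rather than as the misleading "logical volume already exists".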

Thierry Carrez (ttx) on 2013-05-29
Changed in cinder:
status: Fix Committed → Fix Released

Reviewed: https://review.openstack.org/30967
Committed: http://github.com/openstack/cinder/commit/e7d973c51445fac372d5fe4641517797f0803e4b
Submitter: Jenkins
Branch: stable/grizzly

commit e7d973c51445fac372d5fe4641517797f0803e4b
Author: John Griffith <email address hidden>
Date: Mon May 27 20:04:56 2013 +0000

    Catch and report errors from copy image to volume.

    The copy image-to-volume errors weren't being handled
    properly and the result was the lvcreate being retried
    even though the lvcreate itself succeeded.

    The result of this was misleading errors stating that
    the volume couldn't be created because it already existed
    (which it did, becuase the create itself was succesful).

    Fixes bug: 1183283

    Change-Id: I23f05fe64475c3efe285e05a258c4625b801375c
    (cherry picked from commit b2371aeff9eccbd28952dd0f568da26722dc58f7)

tags: removed: grizzly-backport-potential
Thierry Carrez (ttx) on 2013-10-17
Changed in cinder:
milestone: havana-1 → 2013.2
Mike Perez (thingee) on 2014-06-25
Changed in cinder:
assignee: Mike Perez (thingee) → John Griffith (john-griffith)