Ceph backup-restore fails if original volume got deleted
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Undecided | Edward Hope-Morley | 2013.2
Bug Description
Setup:
Backup driver set to "ceph". The swift driver works correctly in this scenario.
Steps:
1. cinder create 1 --display-name QA_acceptance
2. cinder backup-create <volume ID from QA_acceptance volume>
3. watch -n 0 -d cinder backup-list # wait for backup to complete
4. cinder delete <volume ID from QA_acceptance volume>
5. cinder backup-restore <backup ID of the created backup - taken from step 3>
6. cinder list | grep restore_
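For convenience, the same reproduction as a script. This is only a sketch using the python-cinderclient v1 API of that era; the credentials, endpoint, and polling loop are placeholders, and the exact client signatures should be treated as assumptions rather than verified calls.

```python
import time

from cinderclient.v1 import client as cinder_client

# Placeholder credentials/endpoint; substitute your own environment.
cinder = cinder_client.Client("admin", "secret", "admin",
                              "http://keystone.example.com:5000/v2.0")

# Step 1: create a 1 GB volume named QA_acceptance.
vol = cinder.volumes.create(1, display_name="QA_acceptance")

# Steps 2-3: back it up and wait for the backup to reach 'available'.
bkp = cinder.backups.create(vol.id)
while cinder.backups.get(bkp.id).status != "available":
    time.sleep(5)

# Step 4: delete the original volume.
cinder.volumes.delete(vol.id)

# Step 5: restore from the backup. With the ceph backup driver this
# used to end in error_restoring, because the restore path dereferenced
# the now-deleted original volume.
cinder.restores.restore(backup_id=bkp.id)
```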
Result:
2013-07-10 09:44:28 ERROR [cinder....]
Traceback (most recent call last):
  File "/usr/lib/..."
    rval = self.proxy....
  File "/usr/lib/..."
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/..."
    {'status': 'available'})
  File "/usr/lib/..."
    self.gen.next()
  File "/usr/lib/..."
    backup_service)
  File "/usr/lib/..."
    backup_...
  File "/usr/lib/..."
    volume = self.db....
  File "/usr/lib/..."
    return IMPL.volume_...
  File "/usr/lib/..."
    return f(*args, **kwargs)
  File "/usr/lib/..."
    raise exception....
VolumeNotFound: Volume 31ecd14f-...
cinder list shows the restored volume in status: error_restoring
Expected result:
Restore should always succeed, even if the original volume has been deleted.
tags: added: backup-service ceph
Changed in cinder:
  status: New → In Progress
Changed in cinder:
  milestone: none → havana-2
  status: Fix Committed → Fix Released
Changed in cinder:
  milestone: havana-2 → 2013.2
The issue here is that the restore uses the wrong volume_id, i.e. always that of the original volume. This has been fixed in https://review.openstack.org/#/c/35216/
Not sure whether to backport the fix or wait for that review to be finished?
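To make the root cause concrete, here is a minimal sketch contrasting the buggy lookup of the original volume_id with a lookup of the restore target, in the spirit of that review. This is not Cinder's actual code; the names restore_backup, db, backup_service, and target_volume_id are illustrative.

```python
# Minimal sketch of the bug and its fix; not Cinder's actual code.
# 'db', 'backup_service', 'backup', and 'target_volume_id' stand in
# for the real objects passed around by the backup manager.

def restore_backup(context, db, backup_service, backup, target_volume_id):
    # Buggy behaviour: dereference the volume the backup was originally
    # taken from. Once that volume has been deleted, volume_get() raises
    # VolumeNotFound and the restore ends up in error_restoring:
    #
    #     volume = db.volume_get(context, backup['volume_id'])

    # Fixed behaviour (the gist of review 35216): operate on the volume
    # the backup is being restored into, which does exist at this point.
    volume = db.volume_get(context, target_volume_id)

    # Call shape illustrative: hand the target volume to the driver and
    # mark it available once the restore completes.
    backup_service.restore(backup, volume['id'], volume)
    db.volume_update(context, volume['id'], {'status': 'available'})
```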