If the latest incremental backup for a volume is deleted, the next incremental backup attempt for that volume fails to find the expected snapshot and silently falls back to a full backup.
Relevant cinder-backup log excerpt:
Jul 07 15:49:26 localhost.localdomain cinder-backup[23880]: DEBUG cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] returning [{'timestamp': u'1499456874.44', 'backup_id': u'3c4d0b03-1aee-4a3b-8386-0050befd0ffc', 'name': u'backup.3c4d0b03-1aee-4a3b-8386-0050befd0ffc.snap.1499456874.44'}] {{(pid=23880) get_backup_snaps /opt/stack/cinder/cinder/backup/drivers/ceph.py:794}}
Jul 07 15:49:26 localhost.localdomain cinder-backup[23880]: DEBUG cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] Using --from-snap 'backup.3c4d0b03-1aee-4a3b-8386-0050befd0ffc.snap.1499456874.44' for incremental backup of volume ebf16d3c-deb8-4168-9dfd-50d386860d42. {{(pid=23880) _backup_rbd /opt/stack/cinder/cinder/backup/drivers/ceph.py:636}}
Jul 07 15:49:26 localhost.localdomain cinder-backup[23880]: INFO cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] Snapshot='backup.3c4d0b03-1aee-4a3b-8386-0050befd0ffc.snap.1499456874.44' does not exist in base image='volume-ebf16d3c-deb8-4168-9dfd-50d386860d42.backup.base' - aborting incremental backup
Jul 07 15:49:26 localhost.localdomain cinder-backup[23880]: DEBUG cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] Forcing full backup of volume ebf16d3c-deb8-4168-9dfd-50d386860d42. {{(pid=23880) backup /opt/stack/cinder/cinder/backup/drivers/ceph.py:905}}
Jul 07 15:49:26 localhost.localdomain cinder-backup[23880]: DEBUG cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] Creating backup base image='volume-ebf16d3c-deb8-4168-9dfd-50d386860d42.backup.7f27f402-31cb-49fc-ad75-a52d6a5252aa' for volume ebf16d3c-deb8-4168-9dfd-50d386860d42. {{(pid=23880) _full_backup /opt/stack/cinder/cinder/backup/drivers/ceph.py:736}}
Jul 07 15:49:27 localhost.localdomain cinder-backup[23880]: DEBUG cinder.backup.drivers.ceph [None req-e8adeca3-4dc7-427f-8a77-78d2280621ed admin None] Copying data from volume ebf16d3c-deb8-4168-9dfd-50d386860d42. {{(pid=23880) _full_backup /opt/stack/cinder/cinder/backup/drivers/ceph.py:745}}
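The fallback path in the log above can be sketched as follows. This is a hypothetical, simplified model of the driver's decision (the function name `choose_backup_mode` and its signature are illustrative, not the actual code in cinder/backup/drivers/ceph.py): the driver records a single "latest" snapshot per volume, and when that snapshot is missing from the RBD base image it silently forces a full backup instead of trying an older snapshot.

```python
# Hypothetical sketch (not the real Cinder code) of the fallback logic
# visible in the log: if the recorded latest snapshot is absent from the
# base image, force a full backup.

def choose_backup_mode(recorded_snap, snaps_on_base):
    """Return ('incremental', from_snap) or ('full', None)."""
    if recorded_snap and recorded_snap in snaps_on_base:
        return ('incremental', recorded_snap)
    # Recorded snapshot is gone (its backup was deleted) -> full backup,
    # with no warning surfaced to the user.
    return ('full', None)

# After the latest backup is deleted, its snapshot no longer exists on
# the base image, so the next request degrades to a full backup.
print(choose_backup_mode('backup.3c4d0b03.snap.1499456874.44', set()))
# -> ('full', None)
```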
How reproducible:
Every time
Steps to Reproduce:
1. cinder backup-create 7f27f402-31cb-49fc-ad75-a52d6a5252aa (should create full backup)
2. cinder backup-create 7f27f402-31cb-49fc-ad75-a52d6a5252aa --force --incremental (should create incremental backup)
3. cinder backup-delete 3c4d0b03-1aee-4a3b-8386-0050befd0ffc (the backup ID of the backup created in step 2)
4. cinder backup-create 7f27f402-31cb-49fc-ad75-a52d6a5252aa --force --incremental (this should create an incremental backup, since other snapshots to base an incremental on should still exist, but instead it creates a full backup without any warning to the user)
Sounds about right; the fix could go like this:
- We need to keep all of the source volume's backup snapshots (I believe we currently keep only the last one).
- Whenever we delete a backup, we delete its corresponding source volume snapshot.
I believe that would solve not only this problem but also other potential problems that appear once you start deleting backups that are not the most recent one.
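The two bullets above can be sketched as follows. This is a hedged illustration with made-up helper names (`delete_backup_snapshot`, `pick_from_snap` are not the actual Cinder API): keep every backup snapshot on the base image, delete only the snapshot belonging to the deleted backup, and base the next incremental on the newest snapshot that survives.

```python
# Sketch of the proposed fix, under the assumption that each backup
# snapshot is tracked as a dict like the one get_backup_snaps logs above.

def delete_backup_snapshot(snaps, backup_id):
    """On backup delete, drop only that backup's own snapshot."""
    return [s for s in snaps if s['backup_id'] != backup_id]

def pick_from_snap(snaps):
    """Newest remaining snapshot, or None to signal a full backup."""
    if not snaps:
        return None
    return max(snaps, key=lambda s: float(s['timestamp']))['name']

snaps = [
    {'backup_id': 'b1', 'timestamp': '1499450000.00', 'name': 'snap.b1'},
    {'backup_id': 'b2', 'timestamp': '1499456874.44', 'name': 'snap.b2'},
]
snaps = delete_backup_snapshot(snaps, 'b2')  # delete the latest backup
print(pick_from_snap(snaps))                 # snap.b1 -> incremental still possible
```

With all snapshots retained, deleting any one backup leaves the others usable as incremental bases, so the silent full-backup fallback only happens when no snapshots remain at all.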