Actually, the snapshot_id field means that the volume was created from a snapshot.
As I can see when executing 'lvdisplay':
LV Name /dev/n-volumes/volume-528b8d8d-8067-436b-8677-3892369dce83
VG Name n-volumes
LV UUID gltzr2-2bWw-4vnV-CS4h-OICa-dlab-naKik2
LV Write Access read/write
LV snapshot status source of /dev/n-volumes/_snapshot-6863b9ff-3604-4966-83cf-c7fbf73ee056 [active]
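The "LV snapshot status ... source of ... [active]" line is the only place this relationship shows up. As a minimal sketch (not nova code, just an illustration of the idea), the flag could be recovered by parsing the lvdisplay output; the function name and sample text below are hypothetical:

```python
def has_active_snapshot(lvdisplay_output):
    """Return True if the lvdisplay text reports the LV as the
    source of an active snapshot (hypothetical helper)."""
    for line in lvdisplay_output.splitlines():
        if "LV snapshot status" in line and "source of" in line:
            return "[active]" in line
    return False

# Sample text modeled on the output quoted above.
sample = """\
  LV Name               /dev/n-volumes/volume-528b8d8d-8067-436b-8677-3892369dce83
  VG Name               n-volumes
  LV snapshot status    source of /dev/n-volumes/_snapshot-6863b9ff-3604-4966-83cf-c7fbf73ee056 [active]
"""
print(has_active_snapshot(sample))  # True
```

This only works on the host that runs lvdisplay, which is exactly the limitation described below: the information never reaches the higher layers.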
So there is a flag showing that the volume has an active snapshot. But it seems nova uses a higher-level driver, tgtadm, to load volume data, and this flag is not available there.
Maybe it would be nice if the volume metadata were updated with a flag whenever a snapshot is created/removed, and that flag were passed along...
One solution is to first check whether there is a snapshot with volume_id == vol_id of the volume to be deleted, and to show a warning/error message before the actual delete attempt.
Or just catch the exception and show a hint about what the reason could be.
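The pre-delete check could look roughly like the sketch below. This is not nova's actual API; the exception class, the snapshot-record shape (dicts with a 'volume_id' key), and the injected do_delete callable are all assumptions for illustration:

```python
class VolumeBusyError(Exception):
    """Raised when a volume still has dependent snapshots (hypothetical)."""


def delete_volume(volume_id, snapshots, do_delete):
    """Refuse to delete a volume that still has snapshots.

    snapshots -- list of snapshot records, each with a 'volume_id' key
    do_delete -- the actual driver-level delete call
    """
    dependents = [s for s in snapshots if s["volume_id"] == volume_id]
    if dependents:
        # Fail early with a clear hint instead of letting the
        # low-level LVM delete fail with an opaque error.
        raise VolumeBusyError(
            "volume %s has %d snapshot(s); delete them first"
            % (volume_id, len(dependents)))
    do_delete(volume_id)
```

The same message could equally be produced in an except block around the existing delete path, which is the second option suggested above.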