Looking at the whole code around this in the NetApp driver, this looks like legacy support for old NetApp arrays, but the NetApp team can correct me.
The "creating a bootable volume from image operation" workflow starts with cinder calling clone_image[1] method of netapp nfs driver for efficient cloning.
The NetApp driver tries to:
1. find the image in its own cache (not the image volume cache)[2]. The cache is a file named img-cache-<image-id> on one of the NFS-mounted shares[3].
Concern: this img-cache-<image-id> file is written when creating a bootable volume from an image, and, as stated in the code comment, that is only possible with the "earliest versions of FlexGroup"[4], which makes me think this code is kept for backward compatibility.
2. clone the image directly from its mountpoint[1]. Here the driver checks whether the image metadata contains type=nfs, share_location and mountpoint; only if all three exist does it continue with cloning[5]. A sketch of both checks follows this list.
Concern: not really sure who sets these properties on the image.
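A hedged sketch of the two checks described above. The img-cache-<image-id> file name and the three metadata keys come from the linked code ([3][5]); the function names and the nfs_shares structure are illustrative, not the actual nfs_base.py implementation:

    import os

    def find_cached_image(image_id, nfs_shares):
        # The driver-side cache is just a file named img-cache-<image-id>
        # sitting on one of the NFS-mounted shares. nfs_shares is assumed
        # to map share export -> local mountpoint.
        cache_file = 'img-cache-%s' % image_id
        for share, mountpoint in nfs_shares.items():
            candidate = os.path.join(mountpoint, cache_file)
            if os.path.exists(candidate):
                return share, candidate
        return None, None

    def can_clone_from_location(image_location_metadata):
        # Direct cloning is only attempted when all three properties are
        # present in the image location metadata.
        meta = image_location_metadata or {}
        return (meta.get('type') == 'nfs'
                and 'share_location' in meta
                and 'mountpoint' in meta)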
And on top of all this, I'm not even sure the nfs:// prefix is used by any Glance image. I'm aware that the filesystem backend of glance_store can be used to emulate NFS behavior, but that backend uses "file://" as the prefix[6].
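To illustrate the mismatch: the file:// URI below is the form glance_store's filesystem driver produces (per [6], with the default /var/lib/glance/images data directory), while the nfs:// one is the hypothetical form this driver code path appears to expect:

    from urllib.parse import urlparse

    # Location reported by glance_store's filesystem backend (see [6]).
    filesystem_loc = 'file:///var/lib/glance/images/<image-id>'
    # Hypothetical location the NetApp code path appears to expect.
    nfs_loc = 'nfs://filer.example.com/export/share/<image-id>'

    print(urlparse(filesystem_loc).scheme)  # file
    print(urlparse(nfs_loc).scheme)         # nfs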
I agree with Brian's analysis that this can be treated as a hardening opportunity, but my concern is whether the code in question is even used at this point; maybe it's a target for potential cleanup?
[1] https://github.com/openstack/cinder/blob/71c021c501489840f92e60ba6bcce5532b451dc8/cinder/volume/drivers/netapp/dataontap/nfs_base.py#L644
[2] https://github.com/openstack/cinder/blob/71c021c501489840f92e60ba6bcce5532b451dc8/cinder/volume/drivers/netapp/dataontap/nfs_base.py#L674-L679
[3] https://github.com/openstack/cinder/blob/71c021c501489840f92e60ba6bcce5532b451dc8/cinder/volume/drivers/netapp/dataontap/nfs_base.py#L533
[4] https://github.com/openstack/cinder/blob/71c021c501489840f92e60ba6bcce5532b451dc8/cinder/volume/drivers/netapp/dataontap/nfs_base.py#L509-L511
[5] https://github.com/openstack/cinder/blob/71c021c501489840f92e60ba6bcce5532b451dc8/cinder/volume/drivers/netapp/dataontap/nfs_base.py#L917-L923
[6] https://github.com/openstack/glance_store/blob/6f5011d1f05c99894fb8b909d33ad23a20bf83a9/glance_store/_drivers/filesystem.py#L214