@michael,

Thank you so much for getting into this topic; it has always been one of the grey areas.

Current implementation: provisioned_capacity_gb does not include snapshot capacity space. This is as per the design, and the documentation states the same.
Doc-link: https://specs.openstack.org/openstack/cinder-specs/specs/queens/provisioning-improvements.html

If we were to include snapshot capacity space in provisioned_capacity_gb, it would have the side effect that volume/snapshot provisioning fails easily once the pool reaches its maximum provisioned_capacity_gb.

INFO: The maximum provisioned_capacity_gb for a pool is calculated by the formula max_over_subscription_ratio * total_capacity_gb.

Example: Say a pool has total_capacity_gb of 25 GB and max_over_subscription_ratio of 20. This configuration allows provisioned_capacity_gb to reach a maximum of 500 GB.

In this scenario, if the user creates a workload volume, say volume1 with 100 GB, the pool will have allocated_capacity_gb = 100 GB and provisioned_capacity_gb = 100 GB.

Later, if the user creates 4 snapshots of volume1, the pool's allocated_capacity_gb stays at 100 GB, but provisioned_capacity_gb becomes 500 GB.

Now, if the user tries to create a volume of even 1 GB, it will fail because the pool has already reached the maximum provisioned_capacity_gb.

Because of the above, I feel we should not include snapshot capacity in provisioned_capacity_gb. If you see it differently, please reply; we can have a call to discuss it further.
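The arithmetic above can be sketched as follows. This is only a minimal illustration of the over-subscription check being described, not Cinder's actual scheduler code; the function names are hypothetical:

```python
# Minimal sketch of the over-subscription limit discussed above.
# NOT Cinder's real scheduler code; names here are hypothetical.

def max_provisioned_gb(total_capacity_gb, max_over_subscription_ratio):
    # Max provisioned_capacity_gb = max_over_subscription_ratio * total_capacity_gb
    return max_over_subscription_ratio * total_capacity_gb

def can_provision(provisioned_capacity_gb, request_gb,
                  total_capacity_gb, max_over_subscription_ratio):
    # A request fits only if it keeps the pool under the provisioned limit.
    limit = max_provisioned_gb(total_capacity_gb, max_over_subscription_ratio)
    return provisioned_capacity_gb + request_gb <= limit

# Pool from the example: total_capacity_gb = 25, ratio = 20 -> 500 GB limit.
assert max_provisioned_gb(25, 20) == 500

# volume1 (100 GB) fits easily.
provisioned = 0
assert can_provision(provisioned, 100, 25, 20)
provisioned += 100  # provisioned_capacity_gb = 100

# If snapshots were counted, 4 snapshots of volume1 would add 4 * 100 GB.
provisioned += 4 * 100  # provisioned_capacity_gb = 500

# Even a 1 GB volume now fails the check.
assert not can_provision(provisioned, 1, 25, 20)
```

This makes the side effect concrete: counting snapshots drives provisioned_capacity_gb to the 500 GB ceiling while only 100 GB is actually allocated, so any further request is rejected.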