Not exactly for all cases.
For example, the Cinder RBD driver doesn't report provisioned_capacity_gb:
2022-11-26 17:23:15.746 8 DEBUG cinder.scheduler.host_manager [req-f39ed266-c6c4-415b-b5a8-2ec2170c5fc4 - - - - -] Received volume service update from compute0.ipo-region@rbd-1: {'vendor_name': 'Open Source', 'driver_version': '1.2.0', 'storage_protocol': 'ceph', 'total_capacity_gb': 27.24, 'free_capacity_gb': 27.23, 'reserved_percentage': 0, 'multiattach': True, 'thin_provisioning_support': True, 'max_over_subscription_ratio': '20.0', 'location_info': 'ceph:/etc/ceph/ceph.conf:587501de-69b3-11ed-bdd6-dd57b05661dd:cinder:volumes', 'backend_state': 'up', 'volume_backend_name': 'rbd-1', 'replication_enabled': False, 'allocated_capacity_gb': 0, 'filter_function': None, 'goodness_function': None} update_service_capabilities /var/lib/kolla/venv/lib/python3.8/site-packages/cinder/scheduler/host_manager.py:575
2022-11-26 17:23:15.752 8 DEBUG cinder.scheduler.host_manager [req-f39ed266-c6c4-415b-b5a8-2ec2170c5fc4 - - - - -]
In this case the host manager will set provisioned_capacity_gb based on allocated_capacity_gb:
https://github.com/openstack/cinder/blob/master/cinder/scheduler/host_manager.py#L434
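The fallback can be sketched roughly like this (a simplified illustration of the host manager behaviour described above; update_backend is a hypothetical helper, not the actual Cinder code):

```python
# Simplified sketch of the provisioned_capacity_gb fallback in
# cinder/scheduler/host_manager.py (illustrative only; update_backend
# is a made-up helper, not a real Cinder function).

def update_backend(capability: dict) -> dict:
    """Build the scheduler's view of a backend from its reported stats."""
    stats = {
        'total_capacity_gb': capability['total_capacity_gb'],
        'free_capacity_gb': capability['free_capacity_gb'],
        'allocated_capacity_gb': capability.get('allocated_capacity_gb', 0),
    }
    if capability.get('provisioned_capacity_gb') is not None:
        stats['provisioned_capacity_gb'] = capability['provisioned_capacity_gb']
    else:
        # Drivers like RBD don't report provisioned_capacity_gb, so the
        # scheduler falls back to allocated_capacity_gb.
        stats['provisioned_capacity_gb'] = stats['allocated_capacity_gb']
    return stats

# The RBD stats from the log above: no provisioned_capacity_gb reported.
rbd_caps = {'total_capacity_gb': 27.24, 'free_capacity_gb': 27.23,
            'allocated_capacity_gb': 0}
print(update_backend(rbd_caps)['provisioned_capacity_gb'])  # 0
```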
And, finally, provisioned_capacity_gb is used in the capacity filter:
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/capacity_filter.py#L148
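The thin-provisioning branch of that filter boils down to roughly this (a simplified sketch modelled on capacity_filter.py, not the actual implementation):

```python
# Simplified sketch of the thin-provisioning check in
# cinder/scheduler/filters/capacity_filter.py (illustrative only).

def backend_passes(total_gb: float, free_gb: float, provisioned_gb: float,
                   max_over_subscription_ratio: float,
                   requested_gb: float) -> bool:
    """Return True if the backend can accept a new thin volume."""
    # Reject if the new volume would push provisioned capacity past the
    # oversubscription limit.
    provisioned_ratio = (provisioned_gb + requested_gb) / total_gb
    if provisioned_ratio > max_over_subscription_ratio:
        return False
    # Otherwise compare the request against the "virtual" free space,
    # i.e. free space scaled by the oversubscription ratio.
    return free_gb * max_over_subscription_ratio >= requested_gb

# With the values from the log above (provisioned_capacity_gb falls back
# to allocated_capacity_gb = 0), a 10 GB request passes:
print(backend_passes(27.24, 27.23, 0.0, 20.0, 10.0))    # True
# If provisioned capacity had already reached 600 GB, it would fail:
print(backend_passes(27.24, 27.23, 600.0, 20.0, 10.0))  # False
```

So whatever value ends up in provisioned_capacity_gb directly decides whether the oversubscription limit can ever trip.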
So, if you have many thin volumes that don't hold much data, free_capacity_gb will be sufficient to deploy additional volumes, but the actual oversubscription will be 3 times higher than what the filter calculates. As a result we will get 3 times more thin volumes than planned, and we lose oversubscription control. Over time thin volumes on Ceph become thick, and we will exhaust Ceph much more easily than with a correct calculation of provisioned_capacity_gb (based on allocated_capacity_gb).
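To make the failure mode concrete, here is a small simulation with entirely hypothetical numbers (not taken from the log above): when provisioned_capacity_gb is tracked, the filter stops at the configured ratio; when scheduling relies on free space alone, thin volumes keep being admitted far past it.

```python
# Hypothetical numbers: a 100 GB pool, max_over_subscription_ratio = 3,
# and 30 GB thin volumes that each hold only 1 GB of real data.

TOTAL_GB, RATIO, VOL_GB, DATA_GB = 100.0, 3.0, 30.0, 1.0

def volumes_admitted(track_provisioned: bool) -> int:
    """Count how many volumes the filter admits before refusing one."""
    free, provisioned, count = TOTAL_GB, 0.0, 0
    while free * RATIO >= VOL_GB:  # "virtual free space" check
        if track_provisioned and (provisioned + VOL_GB) / TOTAL_GB > RATIO:
            break  # oversubscription limit reached
        provisioned += VOL_GB
        free -= DATA_GB  # thin volume: only real data consumes space
        count += 1
    return count

print(volumes_admitted(True))   # 10 -> 300 GB provisioned, as planned
print(volumes_admitted(False))  # 91 -> 2730 GB provisioned on 100 GB
```

Once those 91 volumes start filling up, the pool needs 27 times its physical capacity, which is exactly the exhaustion scenario described above.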