Comment 8 for bug 1927186

Revision history for this message
Ilya Popov (ilya-p) wrote :

Not exactly; this doesn't hold for all cases.

For example, the Cinder RBD driver doesn't report provisioned_capacity_gb:

    2022-11-26 17:23:15.746 8 DEBUG cinder.scheduler.host_manager [req-f39ed266-c6c4-415b-b5a8-2ec2170c5fc4 - - - - -] Received volume service update from compute0.ipo-region@rbd-1:
    {'vendor_name': 'Open Source', 'driver_version': '1.2.0', 'storage_protocol': 'ceph', 'total_capacity_gb': 27.24, 'free_capacity_gb': 27.23, 'reserved_percentage': 0, 'multiattach': True, 'thin_provisioning_support': True,
    'max_over_subscription_ratio': '20.0', 'location_info': 'ceph:/etc/ceph/ceph.conf:587501de-69b3-11ed-bdd6-dd57b05661dd:cinder:volumes', 'backend_state': 'up', 'volume_backend_name': 'rbd-1', 'replication_enabled': False,
    'allocated_capacity_gb': 0, 'filter_function': None, 'goodness_function': None} update_service_capabilities /var/lib/kolla/venv/lib/python3.8/site-packages/cinder/scheduler/host_manager.py:575
    2022-11-26 17:23:15.752 8 DEBUG cinder.scheduler.host_manager [req-f39ed266-c6c4-415b-b5a8-2ec2170c5fc4 - - - - -]

In this case the host manager will set provisioned_capacity_gb based on allocated_capacity_gb:

https://github.com/openstack/cinder/blob/master/cinder/scheduler/host_manager.py#L434
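A minimal sketch of that fallback (not the actual cinder code; the function name and dict shape here are illustrative only):

```python
# Simplified sketch of the host_manager fallback: when a backend does not
# report provisioned_capacity_gb, the scheduler substitutes
# allocated_capacity_gb instead.
def effective_provisioned_capacity(capability: dict) -> float:
    """Return provisioned_capacity_gb, falling back to allocated_capacity_gb."""
    if 'provisioned_capacity_gb' in capability:
        return capability['provisioned_capacity_gb']
    # Fallback: allocated_capacity_gb only counts volumes created through
    # this cinder service, so it can badly undercount real provisioning
    # on a shared backend such as a Ceph pool.
    return capability.get('allocated_capacity_gb', 0)

# Capabilities like the RBD report above: no provisioned_capacity_gb key.
caps = {'total_capacity_gb': 27.24, 'free_capacity_gb': 27.23,
        'allocated_capacity_gb': 0}
print(effective_provisioned_capacity(caps))  # 0 -> pool looks empty to the scheduler
```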

And, finally, provisioned_capacity_gb is used in capacity filter:

https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/capacity_filter.py#L148
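The oversubscription check boils down to roughly this (a hedged sketch; the real filter also handles reserved space, unknown/infinite capacity, and thick provisioning):

```python
# Simplified sketch of the thin-provisioning check in capacity_filter.py:
# the projected provisioned ratio must stay under max_over_subscription_ratio.
def passes_capacity_filter(requested_gb: float, total_gb: float,
                           provisioned_gb: float,
                           max_over_subscription_ratio: float) -> bool:
    provisioned_ratio = (provisioned_gb + requested_gb) / total_gb
    return provisioned_ratio <= max_over_subscription_ratio

# With provisioned_capacity_gb falling back to allocated_capacity_gb == 0,
# the filter keeps admitting volumes:
print(passes_capacity_filter(10, 100, 0, 20.0))     # True
# With a correctly reported provisioned capacity it would start rejecting:
print(passes_capacity_filter(10, 100, 1995, 20.0))  # False
```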

So, if you have many thin volumes that don't hold much data, free_capacity_gb will still be sufficient to deploy additional volumes, but the actual oversubscription will be 3 times higher than what the filter calculates. As a result we get 3 times more thin volumes than planned, so we lose oversubscription control. Over time thin volumes on Ceph fill up and become effectively thick, and we exhaust Ceph much more easily than we would with a correct calculation of provisioned_capacity_gb (instead of the fallback to allocated_capacity_gb).
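To make the failure mode concrete, here is a worked example with entirely hypothetical numbers (the 3x factor and pool size are illustrative, not taken from the report above):

```python
# Hypothetical numbers illustrating lost oversubscription control.
total_gb = 100.0
mosr = 20.0                  # max_over_subscription_ratio
requested_gb = 10.0          # size of the next volume request

real_provisioned_gb = 6000.0     # thin volumes actually carved out of the pool
reported_provisioned_gb = 0.0    # fallback to allocated_capacity_gb == 0

# What the filter sees vs. reality:
filter_ratio = (reported_provisioned_gb + requested_gb) / total_gb  # 0.1
real_ratio = (real_provisioned_gb + requested_gb) / total_gb        # 60.1

# The filter admits the volume (0.1 <= 20) even though the pool is already
# oversubscribed 3x beyond the configured limit (60.1 > 20).
print(filter_ratio <= mosr, real_ratio > mosr)  # True True
```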