Scheduling is not even among multiple thin provisioning pools which have different sizes

Bug #1917293 reported by zhaoleilc
Affects: Cinder
Status: Incomplete
Importance: Low
Assigned to: Unassigned

Bug Description

Description
===========
Scheduling is uneven among multiple thin provisioning pools
that have different sizes. For example, suppose there are two thin
provisioning pools: Pool0 has 10T of capacity, Pool1 has 30T, and both
have a max_over_subscription_ratio of 20. We assume that the
provisioned_capacity_gb of Pool1 is 250T and the provisioned_capacity_gb
of Pool0 is 0.

According to the formula in the cinder source code, the virtual free
capacity of Pool0 is 10*20-0=200T and the virtual free capacity of Pool1
is 30*20-250=350T. So a newly created volume is scheduled to Pool1 instead
of Pool0. However, Pool0 should be chosen, since it has the larger real
free capacity.

In short, the scheduler tends to favor the pool with the larger size.
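The arithmetic above can be checked with a short standalone sketch. The helper below is a hypothetical simplification of cinder's thin-provisioning formula (reserved_percentage assumed to be 0; capacities given in T for readability):

```python
import math

def virtual_free(total_tb, provisioned_tb, mosr=20.0, reserved_pct=0):
    """Virtual free capacity: total * ratio - provisioned - floor(reserved)."""
    return total_tb * mosr - provisioned_tb - math.floor(total_tb * reserved_pct / 100)

pool0 = virtual_free(10, 0)    # 10 * 20 - 0   = 200 (T)
pool1 = virtual_free(30, 250)  # 30 * 20 - 250 = 350 (T)

# The capacity weigher prefers the larger virtual free space, so the new
# volume lands on Pool1 even though Pool0 carries no provisioned load.
print(pool0, pool1)  # 200.0 350.0
```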

Steps to reproduce
==================
1. Provision two thin provisioning pools with a gap between their
sizes. For example, Pool0 has 10T of capacity and Pool1 has 30T.
2. Ensure that both pools have a max_over_subscription_ratio of 20,
the provisioned_capacity_gb of Pool1 is 250T, and the
provisioned_capacity_gb of Pool0 is 0.
3. Create a new volume.
4. Observe the pool to which the volume is scheduled.

Expected result
===============
The new volume is scheduled to Pool0.

Actual result
=============
The new volume is scheduled to Pool1 because of its larger size.

Environment
===========
master branch of cinder

Code
====
# cinder/cinder/utils.py
def calculate_virtual_free_capacity(total_capacity,
                                    free_capacity,
                                    provisioned_capacity,
                                    thin_provisioning_support,
                                    max_over_subscription_ratio,
                                    reserved_percentage,
                                    thin):

    total = float(total_capacity)
    reserved = float(reserved_percentage) / 100

    if thin and thin_provisioning_support:
        free = (total * max_over_subscription_ratio
                - provisioned_capacity
                - math.floor(total * reserved))
    else:
        # Calculate how much free space is left after taking into
        # account the reserved space.
        free = free_capacity - math.floor(total * reserved)
    return free
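Plugging the report's numbers into this function reproduces the imbalance. The snippet below is a self-contained copy of the thin-provisioning branch above (reserved_percentage assumed 0; capacities given in T for readability):

```python
import math

def calculate_virtual_free_capacity(total_capacity, free_capacity,
                                    provisioned_capacity,
                                    thin_provisioning_support,
                                    max_over_subscription_ratio,
                                    reserved_percentage, thin):
    # Same logic as the excerpt from cinder/cinder/utils.py.
    total = float(total_capacity)
    reserved = float(reserved_percentage) / 100
    if thin and thin_provisioning_support:
        free = (total * max_over_subscription_ratio
                - provisioned_capacity
                - math.floor(total * reserved))
    else:
        free = free_capacity - math.floor(total * reserved)
    return free

# Pool0: 10T total, nothing provisioned.  Pool1: 30T total, 250T provisioned.
pool0 = calculate_virtual_free_capacity(10, 10, 0, True, 20, 0, True)
pool1 = calculate_virtual_free_capacity(30, 30, 250, True, 20, 0, True)
print(pool0, pool1)  # 200.0 350.0 -> the scheduler picks Pool1
```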

zhaoleilc (zhaoleilc)
description: updated
tags: added: provisioning thin
tags: added: thin-provisioning
removed: provisioning thin
Changed in cinder:
status: New → Incomplete
Revision history for this message
Sofia Enriquez (lsofia-enriquez) wrote :

Hi zhaoleilc,
Following the last Cinder meeting[1], we need more information from you:

- Are you testing with Devstack or with a deployment that has 3 schedulers?

(a) Anything that happens with multiple schedulers but not with a single one can be attributed to differing in-memory data: the requests may be going to different schedulers, and each scheduler has different in-memory data at the time it receives the request.
(b) This will also depend on how you configure the capacity weigher.

Regards,
Sofia
[1] http://eavesdrop.openstack.org/meetings/cinder/2021/cinder.2021-03-03-14.00.log.html
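Regarding point (b), the weigher in question is cinder's CapacityWeigher, whose preference can be tuned in cinder.conf. The fragment below is only a sketch: the option names are real cinder scheduler options, but the values are illustrative, not a recommended fix.

```ini
[DEFAULT]
# Weigh backends by free capacity (the default weigher).
scheduler_default_weighers = CapacityWeigher
# A negative multiplier inverts the preference, favoring pools
# with less virtual free space (stacking instead of spreading).
capacity_weight_multiplier = -1.0
```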

Changed in cinder:
importance: Undecided → Low