cinder-volume takes a very long time to start
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Invalid | Undecided | Unassigned |
kolla-ansible | New | Undecided | Unassigned |
Bug Description
Hi,
We have a Victoria deployment where restarting cinder-volume with RBD backends takes 25 minutes. This is caused by cinder-volume asking Ceph via RBD "how big is this volume?" for every volume; in our case we have 50 000 volumes in Ceph.
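To make the cost concrete, here is a minimal sketch (not Cinder's actual driver code) using the Python rados/rbd bindings; the pool name 'volumes' and the ceph.conf path are assumptions:

import rados
import rbd

# Each image needs its own open + size request, so a pool with
# 50 000 volumes means 50 000 round trips to the cluster.
with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    ioctx = cluster.open_ioctx('volumes')  # assumed pool name
    try:
        total = 0
        for name in rbd.RBD().list(ioctx):  # one listing call
            with rbd.Image(ioctx, name, read_only=True) as image:
                total += image.size()  # one request per image
        print('provisioned bytes: %d' % total)
    finally:
        ioctx.close()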
If an RBD backend (pool) in Ceph is used only by OpenStack, there is no reason to ask Ceph for the size of each volume; it is better to let Cinder trust its own DB.
Because of the above, Cinder introduced the configuration option below:
# Set to False if the pool is shared with other usages. On exclusive use driver
# won't query images' provisioned size as they will match the value calculated
# by the Cinder core code for allocated_capacity_gb. This reduces the load on
# the Ceph cluster as well as on the volume service. On non exclusive use
# driver will query the Ceph cluster for per image used disk, this is an
# intensive operation having an independent request for each image. (boolean
# value)
#rbd_exclusive_cinder_pool = true
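For reference, this is a per-backend driver option, so it belongs in the backend section of cinder.conf rather than [DEFAULT]; a sketch assuming a backend section named [rbd-1]:

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_exclusive_cinder_pool = true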
The patch adding this option is merged in master and was backported to stable branches as far back as Rocky:
https:/
We also found that we are using an older stable/victoria cinder image (default rbd_exclusive_cinder_pool = false).
But this should definitely be configurable via kolla-ansible like the other rbd_* options, especially since the default value was changed mid-cycle.
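As a workaround until kolla-ansible exposes a dedicated variable, its merged service configs can carry the option; a sketch assuming the standard /etc/kolla/config layout and kolla-ansible's default Ceph backend name rbd-1:

# /etc/kolla/config/cinder/cinder-volume.conf
# kolla-ansible merges this file into the cinder-volume container's cinder.conf
[rbd-1]
rbd_exclusive_cinder_pool = true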
Changed in cinder:
status: New → Invalid
This configuration option was enabled by default, and that change was backported to the Victoria release in May of last year.
https://review.opendev.org/q/I839441a71238cdad540ba8d9d4d18b1f0fa3ee9d
I'm not sure there's much that Cinder can do here; maybe update to the latest version, or file a bug with kolla-ansible.