total_capacity_gb, free_capacity_gb reported by the CephFS driver is incorrect

Bug #1890833 reported by Goutham Pacha Ravi
Affects: OpenStack Shared File Systems Service (Manila)
Status: Fix Released
Importance: High
Assigned to: Tom Barron
Milestone: victoria-rc1

Bug Description

Description
===========
The CephFS driver reads cluster statistics and reports "total_capacity_gb" and "free_capacity_gb" to the manila scheduler. The statistics it reads are expressed in kibibytes and are supposed to be converted into gibibytes; however, the formula the driver uses for this conversion is incorrect.
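
For reference, a minimal sketch of the intended conversion, using the units.Mi constant (2**20) from oslo.utils: 1 GiB equals 2**20 KiB, so the kibibyte figures must be divided by units.Mi rather than multiplied by it.

  >>> from oslo_utils import units
  >>> units.Mi                         # 2**20
  1048576
  >>> def kb_to_gib(kb):
  ...     # convert a KiB figure from rados into the GiB value the scheduler expects
  ...     return round(kb / units.Mi, 2)
  ...
  >>> kb_to_gib(172953600)             # the cluster's 'kb' value from the triage below
  164.94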

Steps to reproduce
==================

A chronological list of steps to reproduce the issue:
* Set up Manila with the CephFS driver; the protocol doesn't matter, and you can use devstack [1]
* Check the Manila pool list for capacity information:

  For example:

  $ manila pool-list --detail | grep capacity
  | total_capacity_gb | 181354994073600 |
  | free_capacity_gb | 164756019216384 |

* Check capacity on the Ceph cluster

  $ ceph -s
  cluster:
    id: 2ce57bd8-3797-49d7-a6d9-877810c6184f
    health: HEALTH_WARN
            too few PGs per OSD (8 < min 30)

  services:
    mon: 3 daemons, quorum controller-1,controller-2,controller-0 (age 3w)
    mgr: controller-0(active, since 3w), standbys: controller-2, controller-1
    mds: cephfs:1 {0=controller-2=up:active} 2 up:standby
    osd: 15 osds: 15 up (since 3w), 15 in (since 3w)

  task status:
    scrub status:
        mds.controller-2: idle

  data:
    pools: 5 pools, 128 pgs
    objects: 26 objects, 3.1 KiB
    usage: 15 GiB used, 150 GiB / 165 GiB avail
    pgs: 128 active+clean

Expected result
===============
You expect the CephFS driver to report the total capacity as 165 GiB and the free capacity as 150 GiB.

Actual result
=============
Manila's reported capacities are inflated by a factor of 2**40: it shows 181354994073600 for total_capacity_gb and 164756019216384 for free_capacity_gb instead of roughly 165 and 150.

Environment
===========
1. Exact version of OpenStack Manila you are running: main branch, but the bug is reproducible all the way back to the initial driver submission in Newton

2. Which storage backend did you use?
   Ceph, ceph version 14.2.8-59.el8cp (53387608e81e6aa2487c952a604db06faa5b2cd0) nautilus (stable)

3. Which networking type did you use?
   N/A

Triage/RCA
==========

The driver's query [2] can be executed manually like this:

  >>> import rados
  >>> from oslo_utils import units
  >>> rados = rados.Rados(conffile='/etc/ceph/ceph.conf')
  >>> rados.connect()
  >>> rados.get_cluster_stats()
  {'kb': 172953600, 'kb_used': 15830016, 'kb_avail': 157123584, 'num_objects': 26}

The driver's current conversion is:

  total_capacity_gb = stats['kb'] * units.Mi
  free_capacity_gb = stats['kb_avail'] * units.Mi
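
Multiplying by units.Mi (2**20) instead of dividing by it inflates the result by a factor of 2**40, which reproduces the numbers from the pool list above exactly:

  >>> from oslo_utils import units
  >>> 172953600 * units.Mi    # buggy total_capacity_gb, matches the pool list
  181354994073600
  >>> 157123584 * units.Mi    # buggy free_capacity_gb, matches the pool list
  164756019216384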

What it really needs to do is:

  >>> round(rados.get_cluster_stats()['kb']/units.Mi, 2) # total_capacity_gb
  164.94
  >>> round(rados.get_cluster_stats()['kb_avail']/units.Mi, 2) # free_capacity_gb
  149.84
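
A minimal sketch of how the corrected conversion could be wired into the driver's stats reporting; the helper name and structure are illustrative, not the actual driver code:

  from oslo_utils import units

  def _cluster_capacities_gb(rados_client):
      # Illustrative helper: rados reports capacities in KiB, so divide by
      # units.Mi (2**20) to get the GiB values manila's scheduler expects.
      stats = rados_client.get_cluster_stats()
      total_capacity_gb = round(stats['kb'] / units.Mi, 2)
      free_capacity_gb = round(stats['kb_avail'] / units.Mi, 2)
      return total_capacity_gb, free_capacity_gb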

[1] https://docs.openstack.org/manila/latest/contributor/development-environment-devstack.html#dhss-false-driver-handles-share-servers-false-mode
[2] https://opendev.org/openstack/manila/src/commit/2d7c46445396b0db780ba456ffb9284b177a6ae4/manila/share/drivers/cephfs/driver.py#L168-L171

Changed in manila:
importance: Undecided → High
assignee: nobody → Goutham Pacha Ravi (gouthamr)
milestone: none → victoria-3
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to manila (master)

Fix proposed to branch: master
Review: https://review.opendev.org/745402

Changed in manila:
status: New → In Progress
Changed in manila:
assignee: Goutham Pacha Ravi (gouthamr) → Tom Barron (tpb)
Changed in manila:
milestone: victoria-3 → victoria-rc1
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to manila (master)

Reviewed: https://review.opendev.org/745402
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=22d6fe98a3f437709901fac4e4ec65fec414f7d0
Submitter: Zuul
Branch: master

commit 22d6fe98a3f437709901fac4e4ec65fec414f7d0
Author: Goutham Pacha Ravi <email address hidden>
Date: Fri Aug 7 12:38:05 2020 -0700

    Fix capacity calculations in the CephFS driver

    The driver inflated total and available capacity
    due to an incorrect calculation. The driver was
    also ignoring the configuration option
    "reserved_share_percentage" that allows
    deployers to set aside space from scheduling
    to prevent oversubscription.

    While this bugfix may have an upgrade impact,
    some things must be clarified:
    - Inflating the total, free space will allow
      manila to schedule workloads that may run
      out of space - this may cause end user
      downtime and frustration, because shares are
      created (empty subvolumes on ceph occupy no
      space) easily, but they could get throttled
      as they start to fill up.
    - CephFS shares are always thinly provisioned
      but, the driver does not support oversubscription
      via manila. So, real free space is what
      determines capacity based scheduler decisions.
      Users however expect share sizes to be honored,
      and manila will allow provisioning as long
      as there is free space on the cluster. This
      means that Ceph cluster administrators
      must manage oversubscription outside of manila
      to prevent misbehavior.

    Depends-On: Ic96b65d2caab788afca8bfc45575f3c05dc88008
    Change-Id: I6ab157d6d099fe910ec1d90193783b55053ce8f6
    Closes-Bug: #1890833
    Signed-off-by: Goutham Pacha Ravi <email address hidden>

Changed in manila:
status: In Progress → Fix Released
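
For context on the "reserved_share_percentage" option mentioned in the fix, here is a simplified sketch of how a reserved percentage shrinks the free space the scheduler is willing to use; it approximates manila's capacity-based scheduling rather than quoting the actual CapacityFilter code, and the 10% value is hypothetical:

  import math

  def schedulable_free_gb(total_gb, free_gb, reserved_share_percentage):
      # Set aside a fraction of total capacity so the scheduler never packs
      # the backend completely full (simplified illustration).
      reserved = reserved_share_percentage / 100.0
      return free_gb - math.floor(total_gb * reserved)

  # With the corrected capacities from this bug (about 165 GiB total,
  # 150 GiB free) and a hypothetical reserved_share_percentage of 10:
  print(schedulable_free_gb(165, 150, 10))  # 134 GiB remain schedulable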