charm needs a pg_num cap when deploying to avoid creating too many pgs

Bug #1492742 reported by Nobuto Murata
This bug affects 9 people
Affects                                 Status        Importance  Assigned to   Milestone
Charm Helpers                           Invalid       Medium      Unassigned
ceph-radosgw (Juju Charms Collection)   Fix Released  Medium      Billy Olsen
cinder-ceph (Juju Charms Collection)    Fix Released  Medium      Billy Olsen
glance (Juju Charms Collection)         Fix Released  Medium      Billy Olsen
nova-compute (Juju Charms Collection)   Fix Released  Medium      Billy Olsen

Bug Description

Since pg_num cannot be decreased after pool creation, the charms should create fewer PGs to be fail-safe.
http://ceph.com/docs/master/rados/operations/placement-groups/#set-the-number-of-placement-groups

$ sudo ceph status
    cluster 026593f1-93c3-4144-bb72-69c75c7eb3fd
     health HEALTH_WARN
            too many PGs per OSD (337 > max 300)
     monmap e2: 3 mons at {angha=10.230.123.3:6789/0,axex=10.230.123.5:6789/0,duwende=10.230.123.4:6789/0}
            election epoch 8, quorum 0,1,2 angha,duwende,axex
     osdmap e26: 5 osds: 5 up, 5 in
      pgmap v835: 562 pgs, 4 pools, 12270 MB data, 1568 objects
            36779 MB used, 2286 GB / 2322 GB avail
                 562 active+clean
  client io 26155 kB/s wr, 6 op/s

$ sudo ceph osd dump | grep pg_num
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'nova' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 13 flags hashpspool stripe_width 0
pool 2 'glance' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 25 flags hashpspool stripe_width 0
pool 3 'cinder-ceph' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 20 flags hashpspool stripe_width 0

$ sudo ceph osd pool set glance pg_num 64
Error EEXIST: specified pg_num 64 <= current 166

Nobuto Murata (nobuto)
tags: added: cpec
Revision history for this message
Nobuto Murata (nobuto) wrote :

This issue can be reproduced easily when the nova, glance, and cinder-ceph pools are created in one Ceph cluster.

[hooks/charmhelpers/contrib/storage/linux/ceph.py]
def create_pool(service, name, replicas=3):
    """Create a new RADOS pool."""
    if pool_exists(service, name):
        log("Ceph pool {} already exists, skipping creation".format(name),
            level=WARNING)
        return

    # Calculate the number of placement groups based
    # on upstream recommended best practices.
    osds = get_osds(service)
    if osds:
        pgnum = (len(osds) * 100 // replicas)
    else:
        # NOTE(james-page): Default to 200 for older ceph versions
        # which don't support OSD query from cli
        pgnum = 200

    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
    check_call(cmd)
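
For illustration (my own standalone sketch, not part of the charm), plugging the reporter's figures into the same calculation reproduces the pg_num 166 values seen in the osd dump above:

# Standalone illustration of the pg_num calculation in create_pool() above,
# using the figures from this report (5 OSDs, 3 replicas). Not charm code.
def pgnum_for_pool(num_osds, replicas=3):
    if num_osds:
        return num_osds * 100 // replicas
    # Fallback used when the OSD count cannot be queried from the CLI.
    return 200

print(pgnum_for_pool(5))  # -> 166, matching 'pg_num 166' in the osd dump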

Changed in cinder-ceph (Juju Charms Collection):
milestone: none → 16.01
Changed in glance (Juju Charms Collection):
milestone: none → 16.01
Changed in nova-compute (Juju Charms Collection):
milestone: none → 16.01
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Nobuto, this is more a problem of not having enough OSDs to support the number of pools you need for your environment. When the charms deploy, they are not aware of how many pools will be created in total; each one only knows about the pools it creates itself, so it cannot adjust its pg_num accordingly. If we were to cap the pg_num set by the charms, we would be setting a suboptimal value according to the Ceph documentation, which would work adversely for environments with fewer pools. Currently we use the following formula to calculate pg_num:

    pg_num = num_osds * 100 / replicas

So I think the only options you have are either to add more OSDs to your Ceph cluster or to decrease the number of services that depend on Ceph.
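
A quick worked example (my own illustration, using the numbers from the original report) shows why the warning appears as soon as several charms apply this formula independently:

# Each of the nova, glance and cinder-ceph pools gets 5 * 100 // 3 = 166 PGs,
# and the default rbd pool has 64. With 3 replicas spread over 5 OSDs:
pools = [64, 166, 166, 166]        # pg_num per pool, as in the osd dump above
replicas, osds = 3, 5
pgs_per_osd = sum(pools) * replicas // osds
print(sum(pools), pgs_per_osd)     # -> 562 total PGs, 337 PGs per OSD (> max 300)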

Revision history for this message
Billy Olsen (billy-olsen) wrote :

This warning appears to be due to a change that went into the Hammer release of Ceph in commit https://github.com/ceph/ceph/commit/7f3dcdb089b4342e8624d37abfe5437db8a15c39. The default threshold for mon_pg_warn_max_per_osd has since been tuned down to 300; with this commit, the warning is raised if there are more than 300 placement groups on average per OSD, counted across the OSDs that are UP and IN the cluster.

For >= Hammer releases of Ceph, I think what Ed lays out is appropriate - increasing the number of OSDs available (which is more likely in a production cluster), increasing the number of replicas required, or decreasing the number of services which use Ceph.

Another option is to simply change the default threshold at which the warnings appear. Not necessarily ideal, but it is a tunable parameter. It's a monitor option and is settable via the following key:

mon pg warn max per osd = 500

For the record, if you undersize things and put fewer than 30 pgs per OSD in the cluster, you'll also get a warning.
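
A rough paraphrase of that health check (my own sketch, not the actual Ceph source):

# Sketch of the Hammer-era check described above, not Ceph source code:
# warn when the average PG count per up+in OSD falls outside the bounds.
def pg_per_osd_health(total_pg_replicas, osds_up_in, warn_max=300, warn_min=30):
    per_osd = total_pg_replicas // osds_up_in
    if per_osd > warn_max:
        return "HEALTH_WARN: too many PGs per OSD ({} > max {})".format(per_osd, warn_max)
    if per_osd < warn_min:
        return "HEALTH_WARN: too few PGs per OSD ({} < min {})".format(per_osd, warn_min)
    return "HEALTH_OK"

print(pg_per_osd_health(562 * 3, 5))   # -> too many PGs per OSD (337 > max 300)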

Revision history for this message
Nobuto Murata (nobuto) wrote :

I understand that a Juju charm does not know how many pools will be created at deployment time, but it does know how many OSDs are in the cluster. How about creating pools on the assumption that 5 to 10 pools will be created (typical for an OpenStack deployment) and targeting 100 PGs per OSD (as described in [1]), so that deployments are safe by default instead of consuming the whole recommended ratio with just one pool?

Indeed, the PGs-per-OSD ratio won't be optimal just after deployment. But after deployment the Juju charms do know how many pools were created and how many OSDs there are, so we could provide an action to optimize PGs per OSD to the user's desired value, for example:

$ juju action do ceph/0 optimize-pgnum target-pgs-per-osd=200

[1] http://ceph.com/pgcalc/
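
As an illustration of this suggestion (my own numbers and helper name, not an implemented charm option), the per-pool calculation would divide a per-OSD target across an assumed pool count rather than spending the whole budget on a single pool:

# Illustration only: size each new pool so that an assumed number of pools
# together land on the target PGs per OSD, instead of 100 PGs/OSD per pool.
def conservative_pgnum(num_osds, replicas=3, target_pgs_per_osd=100,
                       expected_pools=8):
    return max(1, num_osds * target_pgs_per_osd // (replicas * expected_pools))

# 5 OSDs, 3 replicas, ~8 pools expected -> about 20 PGs per pool instead of 166.
print(conservative_pgnum(5))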

James Page (james-page)
Changed in cinder-ceph (Juju Charms Collection):
importance: Undecided → Medium
Changed in nova-compute (Juju Charms Collection):
importance: Undecided → Medium
Changed in glance (Juju Charms Collection):
importance: Undecided → Medium
Changed in charm-helpers:
importance: Undecided → Medium
status: New → Invalid
Changed in cinder-ceph (Juju Charms Collection):
status: New → Triaged
Changed in glance (Juju Charms Collection):
status: New → Triaged
Changed in nova-compute (Juju Charms Collection):
status: New → Triaged
James Page (james-page)
Changed in glance (Juju Charms Collection):
milestone: 16.01 → 16.04
Changed in nova-compute (Juju Charms Collection):
milestone: 16.01 → 16.04
Changed in cinder-ceph (Juju Charms Collection):
milestone: 16.01 → 16.04
Mick Gregg (macgreagoir)
tags: added: canonical-bootstack
James Page (james-page)
Changed in glance (Juju Charms Collection):
milestone: 16.04 → 16.07
Changed in nova-compute (Juju Charms Collection):
milestone: 16.04 → 16.07
Changed in cinder-ceph (Juju Charms Collection):
milestone: 16.04 → 16.07
Revision history for this message
William (william-forsyth) wrote :

I deployed the openstack-lxd bundle and encountered this error. To fix it, I redeployed the bundle with 6 OSDs, and the number of PGs per OSD rose even higher:

HEALTH_ERR 8968 pgs are stuck inactive for more than 300 seconds; 8968 pgs stuck inactive; too many PGs per OSD (1494 > max 300)

Revision history for this message
Brian Collins (bcollins-b) wrote :

I have a clean deployment of OpenStack using Autopilot, with Ceph for both object and block storage, and this bug rears its head. I have a 3-node storage stack with 2 usable Ceph drives per node. I'm curious how I might temporarily get around this error by increasing the max PG number at which the warning is flagged. It's mentioned in post #3, however I'm unable to set it using Juju, and placing it directly in ceph.conf doesn't work as it won't persist.

'mon pg warn max per osd = 500'

ubuntu@juju-machine-0-lxc-1:~$ sudo ceph status
sudo: unable to resolve host juju-machine-0-lxc-1
    cluster 6639db42-5c26-45b8-b658-ed4193b01c88
     health HEALTH_WARN
            too many PGs per OSD (462 > max 300)
     monmap e2: 3 mons at {juju-machine-0-lxc-1=10.14.0.77:6789/0,juju-machine-1-lxc-1=10.14.0.69:6789/0,juju-machine-2-lxc-3=10.14.0.54:6789/0}
            election epoch 8, quorum 0,1,2 juju-machine-2-lxc-3,juju-machine-1-lxc-1,juju-machine-0-lxc-1
     osdmap e58: 6 osds: 6 up, 6 in
      pgmap v108: 924 pgs, 14 pools, 500 MB data, 118 objects
            3012 MB used, 2501 GB / 2504 GB avail
                 924 active+clean

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Hi Brian, bumping the max PG limit is not ideal since it masks what could be a real problem. Having too many PGs can result in performance degradation due to the extra system resources Ceph consumes to manage those PGs. Unfortunately Ceph does not allow you to reduce the number of PGs in a pool, so adding more OSDs is the only real solution here. Having said that, I think we should add support to the charm for defining a capped/max number of PGs to be assigned at deploy time and leave adjustment to post-deploy time (perhaps using an action to tune the PG count of pools).

summary: - too many PGs per OSD
+ charm needs a pg_num cap when deploying to avoid creating too many pgs
tags: added: openstack sts
Revision history for this message
Brian Collins (bcollins-b) wrote :

Makes sense, I read the Ceph documentation detailing the issues with changing the max PG limit. However, since it is a key/value accepted by ceph.conf, I think it should be possible to alter it, even temporarily. At one point the max PG warning threshold was set to 500, so a tweak to 350 doesn't seem overly outrageous.

Obviously tuning would be the preferred method.

At the moment, I can't deploy OpenStack using Autopilot because of the limit. I've added another physical node with 3 additional drives and redeployed with Autopilot, and the same issue occurred. It is only now that I realised that increasing the number of nodes also increased the placement groups, so while I'm closer to the max PG limit, I'm still over it.

ubuntu@juju-machine-1-lxc-2:~$ sudo ceph status
sudo: unable to resolve host juju-machine-1-lxc-2
    cluster 38fdd52e-3995-4ce8-977f-285b4b685378
     health HEALTH_WARN
            too many PGs per OSD (337 > max 300)
     monmap e2: 3 mons at {juju-machine-1-lxc-2=10.14.0.44:6789/0,juju-machine-2-lxc-0=10.14.0.46:6789/0,juju-machine-3-lxc-2=10.14.0.41:6789/0}
            election epoch 6, quorum 0,1,2 juju-machine-3-lxc-2,juju-machine-1-lxc-2,juju-machine-2-lxc-0
     osdmap e78: 10 osds: 10 up, 10 in
      pgmap v149: 1124 pgs, 14 pools, 500 MB data, 119 objects
            1887 MB used, 3470 GB / 3472 GB avail
                1124 active+clean

tags: added: kanban-cross-team
tags: removed: kanban-cross-team
Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Here are some numbers from when I tried increasing the number of OSDs in a deployment I had:

With 3 OSDs:
     health HEALTH_WARN
            too many PGs per OSD (865 > max 300)

With 6 OSDs (I added 3 nodes to the existing deployment):
     health HEALTH_WARN
            too many PGs per OSD (432 > max 300)

Revision history for this message
Brian Collins (bcollins-b) wrote :

As explained and expected, adding OSDs reduced the placement groups per OSD to a level below the threshold.

ubuntu@juju-machine-0-lxc-4:~$ sudo ceph status
    cluster eabab000-231c-4edb-855c-dd30b1bab56d
     health HEALTH_OK
     monmap e2: 3 mons at {juju-machine-0-lxc-4=10.14.0.53:6789/0,juju-machine-1-lxc-0=10.14.0.62:6789/0,juju-machine-2-lxc-4=10.14.0.60:6789/0}
            election epoch 12, quorum 0,1,2 juju-machine-0-lxc-4,juju-machine-2-lxc-4,juju-machine-1-lxc-0
     osdmap e356: 13 osds: 13 up, 13 in
      pgmap v133686: 1091 pgs, 14 pools, 101133 MB data, 26747 objects
            278 GB used, 3735 GB / 4014 GB avail
                1091 active+clean

Revision history for this message
Edward Hope-Morley (hopem) wrote :

So my current thinking on this is that we could have each client optionally provide a max_pg_num that will override whatever the charm would have used by inference. That way each client can specify a different value (e.g. for Nova, Cinder or Glance pools). We could accompany this with an action to optimise pg_num on pools that can be called once the deployment has finished.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

I don't believe that the right fix is to specify a max_pg_num. After further research into this bug, it appears the Ceph documentation regarding the number of placement groups is wholly misleading, per this mailing-list thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003664.html. According to this, the guidance given at the top of that page for calculating placement groups applies to the whole cluster, not to a specific pool.

The current calculation mechanism in place today will nearly *always* result in this warning. For example, consider cinder-ceph, glance, and nova-compute being related to a Ceph cluster with 3 OSDs. Ceph by default creates the rbd pool with 64 PGs and a size of 3, for 192 placement-group replicas spread throughout the cluster. Next we add the cinder-ceph, glance and nova-compute relations. Since there are 3 OSDs, the charm chooses 128 for the pg_num: 128 PGs x 3 replicas x 3 pools = 1152, plus the 192 for rbd = 1344 PG replicas / 3 OSDs = 448 PGs/OSD (or 320 PGs/OSD if one forgets to specify libvirt-image-backend=rbd for the nova-compute charm).

So even in this base case, the deployment will result in the warning of too many PGs per OSD.

The larger case of, say, 32 OSDs becomes even worse. The ceph-mon charm selects 4096 for the pg_num since there are between 10 and 50 OSDs... so we get 192 PG replicas for the rbd pool, then 4096 x 3 x 3 for the cinder, glance, and nova pools, giving 37,056 PG replicas / 32 OSDs = 1158 PGs/OSD! Way above the threshold. In this case adding more OSDs helps with the pool balance, but the best it can do is 37,056 / 50 = 741 PGs/OSD.

A work-around to avoid this would be to deploy a small number of OSDs to form a small cluster, then add the relations which will choose fairly small amounts of PGs. After the pools are created, add the remaining OSDs into the cluster.

To fix this properly, each charm should indicate to ceph-mon the percentage of the available space its newly created pool is expected to use. ceph-mon can then apply a more appropriate calculation for the pg_num. Reasonable defaults can be chosen for typical pool contents (e.g. glance = 5-10%, nova-rbd = 30-40%, cinder-ceph = 40%, etc.). These values should naturally be tunable so a cluster can be formed that is more volume-heavy or radosgw-heavy, etc.
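
A minimal sketch of that weight-based idea (the helper name, defaults and percentages below are illustrative assumptions, not the merged charm code):

# Sketch only: size a pool by the share of the cluster's data it is expected
# to hold, so all pools together stay near the per-OSD target.
def weighted_pgnum(num_osds, pool_weight_pct, replicas=3, target_pgs_per_osd=100):
    total_budget = num_osds * target_pgs_per_osd // replicas
    return max(2, int(total_budget * pool_weight_pct / 100.0))

# e.g. glance ~10%, cinder-ceph ~40%, nova rbd ~40% on a 32-OSD cluster:
for name, pct in [("glance", 10), ("cinder-ceph", 40), ("nova", 40)]:
    print(name, weighted_pgnum(32, pct))   # -> 106, 426, 426 PGs respectively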

Revision history for this message
Edward Hope-Morley (hopem) wrote :

I think a percentage could work as an alternative to a cap but, in the case where you deploy your entire cluster in one shot, you will still have the problem that no one charm has a reliable view of the total amount of storage ultimately available in the cluster. Therefore, you will still need to perform some post-deployment configuration to optimise the number of placement groups in your pools. Nevertheless, that is still a better proposition than what we have right now, since pg_num can always go up but not down. The only benefit of a fixed rather than percentage cap is that the deployer knows how much storage they are adding, so they could set the cap appropriately (although maybe "cap" is a misleading name in this case).

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

We've been thinking about this and discussing possible solutions. One of the leading ones right now is that we should create any pools with an absolutely minimal number of PGs and then, maybe in the update-status hook, check whether Ceph is reporting too FEW placement groups and bump them at that point.

Revision history for this message
Billy Olsen (billy-olsen) wrote : Re: [Bug 1492742] Re: charm needs a pg_num cap when deploying to avoid creating too many pgs

> We've been thinking about this and discussing possible solutions. One of
> the leading ones right now is that we should create any pools with an
> absolutely minimal number of PGs and, maybe in the update-status hook,
> check if Ceph is claiming too FEW placement groups, and bump them then.

I'm -1 on this idea as I don't think an update-status hook should be actively changing the pools' PGs, as that could cause data shuffling in the cluster that was not initiated by a user request. It's difficult enough to track what Ceph is doing with your data without the charms seemingly randomly changing pool pg_nums.

It might be better to add an option to ceph-mon which indicates the minimum number of OSDs to consider when doing pg_num calculations. This allows the CephBroker to make a more intelligent choice for the pg_num when combined with a usage percentage for the pool, and allows single-shot deployments to calculate based on the number of OSDs to expect while still providing an appropriate number of placement groups for a pool.

I'll pitch up a more concrete proposal to move this forward unless
there are strong objections to my approach.
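
For example (again only a sketch of the proposal, with a hypothetical expected-OSD-count option), the broker could base its calculation on the larger of the current and expected OSD counts:

# Sketch only: combine a hypothetical expected OSD count with a pool weight,
# so single-shot deployments size pools for the cluster they will eventually
# have rather than the OSDs present when the pool is created.
def pgnum_with_expected_osds(current_osds, expected_osd_count, pool_weight_pct,
                             replicas=3, target_pgs_per_osd=100):
    osds = max(current_osds, expected_osd_count)
    budget = osds * target_pgs_per_osd // replicas
    return max(2, int(budget * pool_weight_pct / 100.0))

# 3 OSDs up at deploy time, 30 expected, cinder-ceph weighted at 40%:
print(pgnum_with_expected_osds(3, 30, 40))   # -> 400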

Changed in cinder-ceph (Juju Charms Collection):
assignee: nobody → Billy Olsen (billy-olsen)
Changed in glance (Juju Charms Collection):
assignee: nobody → Billy Olsen (billy-olsen)
Changed in nova-compute (Juju Charms Collection):
assignee: nobody → Billy Olsen (billy-olsen)
Changed in ceph-radosgw (Juju Charms Collection):
importance: Undecided → Medium
assignee: nobody → Billy Olsen (billy-olsen)
milestone: none → 16.07
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-ceph-radosgw (master)

Fix proposed to branch: master
Review: https://review.openstack.org/335222

Changed in ceph-radosgw (Juju Charms Collection):
status: New → In Progress
Changed in cinder-ceph (Juju Charms Collection):
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-cinder-ceph (master)

Fix proposed to branch: master
Review: https://review.openstack.org/335223

Changed in glance (Juju Charms Collection):
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-glance (master)

Fix proposed to branch: master
Review: https://review.openstack.org/335224

Changed in nova-compute (Juju Charms Collection):
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-compute (master)

Fix proposed to branch: master
Review: https://review.openstack.org/335225

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-cinder-ceph (master)

Reviewed: https://review.openstack.org/335223
Committed: https://git.openstack.org/cgit/openstack/charm-cinder-ceph/commit/?id=2e5c583a93c4f9c98c3810607247fe45170af52d
Submitter: Jenkins
Branch: master

commit 2e5c583a93c4f9c98c3810607247fe45170af52d
Author: Billy Olsen <email address hidden>
Date: Tue Jun 28 12:07:49 2016 -0700

    Add ceph-pool-weight option for calculating pgs

    Provide the weight option to the Ceph broker request API for requesting
    the creation of a new Ceph storage pool. The weight is used to indicate
    the percentage of the data that the pool is expected to consume. Each
    environment may have slightly different needs based on the type of
    workload so a config option labelled ceph-pool-weight is provided to
    allow the operator to tune this value.

    Closes-Bug: #1492742

    Change-Id: I844353dc8b354751de1af5d30b6d512712d40a62
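
For client charms consuming charm-helpers, the resulting request might look roughly like this (a sketch assuming the broker-request API accepts the weight described in the commit message; the names and the 40% value are illustrative):

# Sketch only: pass the pool weight through the Ceph broker request,
# assuming the weight argument described in the commit message above.
from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

rq = CephBrokerRq()
# weight = expected share (percent) of cluster data this pool will hold;
# surfaced to operators through the ceph-pool-weight config option.
rq.add_op_create_pool(name='cinder-ceph', replica_count=3, weight=40)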

Changed in cinder-ceph (Juju Charms Collection):
status: In Progress → Fix Committed
Changed in nova-compute (Juju Charms Collection):
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-compute (master)

Reviewed: https://review.openstack.org/335225
Committed: https://git.openstack.org/cgit/openstack/charm-nova-compute/commit/?id=12396779b55f2884ae1910a1b31a11dbdcd5ebe5
Submitter: Jenkins
Branch: master

commit 12396779b55f2884ae1910a1b31a11dbdcd5ebe5
Author: Billy Olsen <email address hidden>
Date: Tue Jun 28 12:55:07 2016 -0700

    Add ceph-pool-weight option for calculating pgs

    Provide the weight option to the Ceph broker request API for requesting
    the creation of a new Ceph storage pool. The weight is used to indicate
    the percentage of the data that the pool is expected to consume. Each
    environment may have slightly different needs based on the type of
    workload so a config option labelled ceph-pool-weight is provided to
    allow the operator to tune this value.

    Closes-Bug: #1492742

    Change-Id: Ia9aba8c4dee7a94c36c282273a356d9c13df7f75

Changed in glance (Juju Charms Collection):
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-glance (master)

Reviewed: https://review.openstack.org/335224
Committed: https://git.openstack.org/cgit/openstack/charm-glance/commit/?id=c837863cf21b252699e3d9a47ae1c257c8b94bad
Submitter: Jenkins
Branch: master

commit c837863cf21b252699e3d9a47ae1c257c8b94bad
Author: Billy Olsen <email address hidden>
Date: Tue Jun 28 12:39:22 2016 -0700

    Add ceph-pool-weight option for calculating pgs

    Provide the weight option to the Ceph broker request API for requesting
    the creation of a new Ceph storage pool. The weight is used to indicate
    the percentage of the data that the pool is expected to consume. Each
    environment may have slightly different needs based on the type of
    workload so a config option labelled ceph-pool-weight is provided to
    allow the operator to tune this value.

    Closes-Bug: #1492742

    Change-Id: I56c7de4d9213fe85ce89cbad957291b438f6f92f

Changed in ceph-radosgw (Juju Charms Collection):
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceph-radosgw (master)

Reviewed: https://review.openstack.org/335222
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-radosgw/commit/?id=bb1499e4e1b1e5255da9eb83af71f4b8871e987d
Submitter: Jenkins
Branch: master

commit bb1499e4e1b1e5255da9eb83af71f4b8871e987d
Author: Billy Olsen <email address hidden>
Date: Tue Jun 28 13:43:42 2016 -0700

    Add rgw-buckets-pool-weight option for calculating pgs

    Provide the weight option to the Ceph broker request API for requesting
    the creation of a new Ceph storage pool. The weight is used to indicate
    the percentage of the data that the pool is expected to consume. Each
    environment may have slightly different needs based on the type of
    workload so a config option labelled rgw-buckets-pool-weight is provided
    to allow the operator to tune this value.

    Closes-Bug: #1492742

    Change-Id: I15ae3b853fa3379a9de2ddde3e55dc242a4d4ab2

Liam Young (gnuoy)
Changed in glance (Juju Charms Collection):
status: Fix Committed → Fix Released
Changed in nova-compute (Juju Charms Collection):
status: Fix Committed → Fix Released
Changed in cinder-ceph (Juju Charms Collection):
status: Fix Committed → Fix Released
Changed in ceph-radosgw (Juju Charms Collection):
status: Fix Committed → Fix Released