ceph with multiple OSD pools fails to upgrade osds

Bug #1788722 reported by Drew Freiberger
Affects          Status        Importance  Assigned to        Milestone
Ceph OSD Charm   Fix Released  High        James Page         18.08
charms.ceph      Fix Released  High        Chris MacNaughton  -

Bug Description

The ceph-osd charm (17.11, though the issue appears to still be present in 18.05) forces OSD cluster upgrades after a Juju agent upgrade (a Juju 1 to Juju 2 upgrade).

The error is:

Traceback (most recent call last):
  File "hooks/config-changed", line 549, in <module>
    hooks.execute(sys.argv)
  File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/core/hookenv.py", line 768, in execute
    self._hooks[hook_name]()
  File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/contrib/hardening/harden.py", line 79, in _harden_inner2
    return f(*args, **kwargs)
  File "hooks/config-changed", line 342, in config_changed
    check_for_upgrade()
  File "hooks/config-changed", line 116, in check_for_upgrade
    upgrade_key='osd-upgrade')
  File "lib/ceph/utils.py", line 1801, in roll_osd_cluster
    osd_sorted_list[position - 1].name))
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'

The failing line is: https://github.com/openstack/charm-ceph-osd/blob/master/lib/ceph/utils.py#L2254

I'm finding that when get_osd_tree runs, the parser stops once it reaches a second root entry; members of the second pool of OSDs then hit this error because their hostnames don't match any hostname in the first pool of OSD hosts.
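
Concretely, the crash is the position - 1 expression in roll_osd_cluster once get_upgrade_position() has returned None instead of an index. A minimal sketch of the failing pattern (names taken from the traceback above, not the charm's exact code):

def previous_in_ring(osd_sorted_list, position):
    """Sketch of the failing expression in roll_osd_cluster: when the
    unit's hostname is missing from the parsed tree, position is None
    and the subtraction below raises the TypeError seen above.
    """
    return osd_sorted_list[position - 1].name

# previous_in_ring(anything, None)
# -> TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'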

Here's my OSD tree:

ubuntu@juju-machine-2-lxc-0:~$ sudo ceph osd tree
sudo: unable to resolve host juju-machine-2-lxc-0
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -9 9.00000 root ssds
-10 3.00000 host OS-CS-10
 52 1.00000 osd.52 up 1.00000 1.00000
 56 1.00000 osd.56 up 1.00000 1.00000
 58 1.00000 osd.58 up 1.00000 1.00000
-12 3.00000 host OS-CS-09
 53 1.00000 osd.53 up 1.00000 1.00000
 55 1.00000 osd.55 up 1.00000 1.00000
 59 1.00000 osd.59 up 1.00000 1.00000
-11 3.00000 host OS-CS-08
 51 1.00000 osd.51 up 1.00000 1.00000
 54 1.00000 osd.54 up 1.00000 1.00000
 57 1.00000 osd.57 up 1.00000 1.00000
 -1 39.20966 root default
 -2 4.35394 host OS-CS-05
  0 0.54399 osd.0 up 1.00000 1.00000
  2 0.54399 osd.2 up 1.00000 1.00000
  4 0.54399 osd.4 up 1.00000 1.00000
  7 0.54399 osd.7 up 1.00000 1.00000
 11 0.54399 osd.11 up 1.00000 1.00000
 16 0.54399 osd.16 up 1.00000 1.00000
 33 1.09000 osd.33 up 1.00000 1.00000
 -3 4.35394 host OS-CS-02
  1 0.54399 osd.1 up 1.00000 1.00000
  3 0.54399 osd.3 up 1.00000 1.00000
  6 0.54399 osd.6 up 1.00000 1.00000
 10 0.54399 osd.10 up 1.00000 1.00000
 15 0.54399 osd.15 up 1.00000 1.00000
 20 0.54399 osd.20 up 1.00000 1.00000
 35 1.09000 osd.35 up 1.00000 1.00000
 -4 4.35394 host OS-CS-03
  5 0.54399 osd.5 up 1.00000 1.00000
  9 0.54399 osd.9 up 1.00000 1.00000
 13 0.54399 osd.13 up 1.00000 1.00000
 18 0.54399 osd.18 up 1.00000 1.00000
 22 0.54399 osd.22 up 1.00000 1.00000
 25 0.54399 osd.25 up 1.00000 1.00000
 34 1.09000 osd.34 up 1.00000 1.00000
 -5 4.35394 host OS-CS-01
  8 0.54399 osd.8 up 1.00000 1.00000
 12 0.54399 osd.12 up 1.00000 1.00000
 17 0.54399 osd.17 up 1.00000 1.00000
 21 0.54399 osd.21 up 1.00000 1.00000
 24 0.54399 osd.24 up 1.00000 1.00000
 27 0.54399 osd.27 up 1.00000 1.00000
 31 1.09000 osd.31 up 1.00000 1.00000

This error is occurring on OS-CS-02.

When I put debug statements inside get_upgrade_position(), it only iterates through the three OS-CS-08/09/10 hosts when checking the upgrade order.

Revision history for this message
Drew Freiberger (afreiberger) wrote :

Marking Field-Critical. This is currently a live issue in production. I'm working on a patch, but I want to get more eyes on this for a bulletproof solution.

Revision history for this message
Drew Freiberger (afreiberger) wrote :

https://github.com/openstack/charm-ceph-osd/blob/master/lib/ceph/utils.py#L582

 child_ids = json_tree['nodes'][0]['children']

This is the root cause.

The ['nodes'][0] means we only look at the children of the first entry in the OSD map, which is typically the root bucket that contains all hosts. However, with multiple pools we need to find all nodes that have children, so that hosts in every pool get listed.
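
For illustration, a minimal sketch of the multi-root direction (hypothetical helper name; the actual patches are linked in the comments below): collect children from every root bucket instead of only nodes[0]:

def children_of_all_roots(json_tree):
    """Gather child bucket ids from every 'root' node in the OSD tree.

    Hypothetical helper sketching the fix direction; the charm's real
    change landed in lib/ceph/utils.py (get_osd_tree).
    """
    roots = [node for node in json_tree['nodes']
             if node.get('type') == 'root']
    child_ids = []
    for root in roots:
        child_ids.extend(root.get('children', []))
    return child_ids

# For the tree above this picks up the hosts under both the 'ssds' and
# 'default' roots, instead of only the first root's children.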

Revision history for this message
Drew Freiberger (afreiberger) wrote :

I've submitted the following patch, which was tested and bypasses the issue described above.

https://review.openstack.org/#/c/595914/

What I don't know is whether there are other bits in ceph-mon that also need mending to account for this when populating the "I'm done with my upgrade" state db.

After patching this node, I started getting spammed with:

Error ENOENT: error obtaining 'osd_OS-CS-01_None_start': (2) No such file or directory
Error ENOENT: error obtaining 'osd_OS-CS-01_None_start': (2) No such file or directory
Error ENOENT: error obtaining 'osd_OS-CS-01_None_start': (2) No such file or directory
(the same line repeated continuously)

when running config-changed.

This was coming from stderr in:

  File "lib/ceph/utils.py", line 1716, in wait_on_previous_node
    log("{} is not finished. Waiting".format(previous_node))

as seen in the traceback produced when I Ctrl-C'd the test.

I'm assuming this is just because the key was never dropped by the other unit, which failed at the same task, but I don't know.
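
For context, a sketch of the polling that produces the ENOENT spam (assumptions: the key format 'osd_<host>_<version>_start' is read off the error text, and the invocation mirrors charms.ceph's monitor_key_* helpers, which shell out to the ceph CLI):

import subprocess

def previous_node_started(previous_node, version, upgrade_key='osd-upgrade'):
    """Poll the monitor key-value store for the previous node's start
    marker. The 'None' in the keys above suggests version itself was
    None when the key name was built.
    """
    key = 'osd_{}_{}_start'.format(previous_node, version)
    try:
        # ceph prints "Error ENOENT: error obtaining '<key>': (2) No
        # such file or directory" until the other unit sets the key.
        subprocess.check_output(
            ['ceph', '--id', upgrade_key, 'config-key', 'get', key])
        return True
    except subprocess.CalledProcessError:
        return False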

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

There is a proposed fix against charms.ceph at https://review.openstack.org/#/c/596312/

Changed in charm-ceph-osd:
importance: Undecided → High
status: New → In Progress
assignee: nobody → Chris MacNaughton (chris.macnaughton)
Changed in charms.ceph:
status: New → In Progress
importance: Undecided → High
assignee: nobody → Chris MacNaughton (chris.macnaughton)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-ceph-osd (master)

Fix proposed to branch: master
Review: https://review.openstack.org/596317

Revision history for this message
Ryan Beisner (1chb1n) wrote :

While we do have this underway, we do need the usual details. Please attach a sanitized bundle, and ideally a juju crash dump. We ask for those on all SLA bugs. Thank you.

Changed in charm-ceph-osd:
status: In Progress → Incomplete
Changed in charms.ceph:
status: In Progress → Incomplete
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

I don't understand how the juju agent version change would get past the check at https://github.com/openstack/charm-ceph-osd/blob/master/hooks/ceph_hooks.py#L126-L144 and into the upgrade paths.

Revision history for this message
Ryan Beisner (1chb1n) wrote :

We're not able to determine from the bug what versions of Ceph and Ubuntu are in play here. Also, were there other upgrade or update operations done just prior to or after the juju 1->2 upgrade?

Revision history for this message
David Ames (thedac) wrote :

We are also trying to figure out why an upgrade took place.

https://github.com/openstack/charm-ceph-osd/blob/stable/17.11/hooks/ceph_hooks.py#L85
Lines 102 and 106 would have stopped that from occurring unless ceph itself had a major version upgrade.

Is it possible that ceph was upgraded as well? What are the versions of ceph involved before and after?
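
For reference, a self-contained sketch of the gate being discussed (the UPGRADE_PATHS subset here is an assumption for illustration; the real check lives in ceph_hooks.py and charms.ceph):

UPGRADE_PATHS = {  # assumed subset, for illustration only
    'hammer': 'jewel',
    'jewel': 'luminous',
}

def should_roll_cluster(old_version, new_version):
    """A rolling upgrade should only start when the resolved Ceph
    release actually changed and the jump is a supported path; an
    agent-only (juju 1->2) change should stop at the first test.
    """
    if old_version == new_version:
        return False
    return UPGRADE_PATHS.get(old_version) == new_version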

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charms.ceph (master)

Reviewed: https://review.openstack.org/596312
Committed: https://git.openstack.org/cgit/openstack/charms.ceph/commit/?id=a97120757c31a3aa9715d37da179ae5c39a73064
Submitter: Zuul
Branch: master

commit a97120757c31a3aa9715d37da179ae5c39a73064
Author: Chris MacNaughton <email address hidden>
Date: Fri Aug 24 16:12:20 2018 +0200

    Ensure that we get all OSDs in get_osd_tree

    This patch looks for multiple nodes in the OSD tree with type root and
    iterates through all root parent node children to allow for upgrading ceph-osd
    cluster/devices when running both a "default" and an "ssd" pool of OSD hosts,
    for instance.

    Closes-Bug: #1788722
    Change-Id: I69d653f9f3ea4ee8469f3d7323ee68435ba22099

Changed in charms.ceph:
status: Incomplete → Fix Released
Revision history for this message
Drew Freiberger (afreiberger) wrote :

The bundle has been attached to the related support case, and the juju crashdump is uploading to BrickFTP now.

Revision history for this message
Drew Freiberger (afreiberger) wrote :

I'm going to guess we had a charm upgrade that was stuck (hooks not running on a unit) and didn't know about it until it unwedged under the new model. Hopefully the logs in the juju crashdump help determine the nature of the triggered upgrade, as this was the only unit, of the 7 that should have exhibited this behavior, where it actually happened. There were no other upgrades at this site in the last few months beyond the addition of new ceph-osd nodes, which were moved under the 'ssds' pool. Notes are left in the associated support case with a reference to that work.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceph-osd (master)

Reviewed: https://review.openstack.org/596317
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=0cb16d8c66680a36166c4a97062ebf2bfbdad192
Submitter: Zuul
Branch: master

commit 0cb16d8c66680a36166c4a97062ebf2bfbdad192
Author: Chris MacNaughton <email address hidden>
Date: Fri Aug 24 16:32:47 2018 +0200

    Sync in charms.ceph

    This patch looks for multiple nodes in the OSD tree with type root and
    iterates through all root parent node children to allow for upgrading ceph-osd
    cluster/devices when running both a default and an ssd pool of OSD hosts,
    for instance.

    Change-Id: Iea9812ee7ac67f9b45a6b38c43c130353e68ad8f
    Closes-Bug: #1788722
    Depends-On: I69d653f9f3ea4ee8469f3d7323ee68435ba22099

Changed in charm-ceph-osd:
status: Incomplete → Fix Committed
David Ames (thedac)
Changed in charm-ceph-osd:
milestone: none → 18.08
Changed in charm-ceph-osd:
status: Fix Committed → Fix Released
Revision history for this message
Bruno Carvalho (brunowcs) wrote :

Hi, I had the same problem upgrading ceph-osd from Jewel to Luminous on Ubuntu 16.04.

My controller version: 2.3.9

I pulled the latest charm version with:

# charm pull ceph-osd

Only ceph-mon is working on Luminous, after this command:

# juju config ceph-mon source=cloud:xenial-pike

Upgrading ceph-osd did not work after this command:

# juju config ceph-osd source=cloud:xenial-pike

Output log:

# tail -f /var/log/juju/unit-ceph-osd-3.log

2018-10-08 21:49:04 INFO juju-log roll_osd_cluster called with luminous
2018-10-08 21:49:05 DEBUG worker.uniter.jujuc server.go:178 running hook tool "juju-log"
2018-10-08 21:49:05 INFO juju-log osd_sorted_list: [<ceph.utils.CrushLocation object at 0x7feab298d9e8>, <ceph.utils.CrushLocation object at 0x7feab298dd68>]
2018-10-08 21:49:05 DEBUG worker.uniter.jujuc server.go:178 running hook tool "juju-log"
2018-10-08 21:49:05 INFO juju-log upgrade position: None
2018-10-08 21:49:05 DEBUG config-changed Traceback (most recent call last):
2018-10-08 21:49:05 DEBUG config-changed File "/var/lib/juju/agents/unit-ceph-osd-3/charm/hooks/config-changed", line 704, in <module>
2018-10-08 21:49:05 DEBUG config-changed hooks.execute(sys.argv)
2018-10-08 21:49:05 DEBUG config-changed File "/var/lib/juju/agents/unit-ceph-osd-3/charm/hooks/charmhelpers/core/hookenv.py", line 847, in execute
2018-10-08 21:49:05 DEBUG config-changed self._hooks[hook_name]()
2018-10-08 21:49:05 DEBUG config-changed File "/var/lib/juju/agents/unit-ceph-osd-3/charm/hooks/charmhelpers/contrib/hardening/harden.py", line 79, in _harden_inner2
2018-10-08 21:49:05 DEBUG config-changed return f(*args, **kwargs)
2018-10-08 21:49:05 DEBUG config-changed File "/var/lib/juju/agents/unit-ceph-osd-3/charm/hooks/config-changed", line 408, in config_changed
2018-10-08 21:49:05 DEBUG config-changed check_for_upgrade()
2018-10-08 21:49:05 DEBUG config-changed File "/var/lib/juju/agents/unit-ceph-osd-3/charm/hooks/config-changed", line 137, in check_for_upgrade
2018-10-08 21:49:05 DEBUG config-changed upgrade_key='osd-upgrade')
2018-10-08 21:49:05 DEBUG config-changed File "lib/ceph/utils.py", line 2263, in roll_osd_cluster
2018-10-08 21:49:05 DEBUG config-changed osd_sorted_list[position - 1].name))
2018-10-08 21:49:05 DEBUG config-changed TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
2018-10-08 21:49:05 ERROR juju.worker.uniter.operation runhook.go:113 hook "config-changed" failed: exit status 1
2018-10-08 21:49:05 DEBUG juju.machinelock machinelock.go:180 machine lock released for uniter (run config-changed hook)
2018-10-08 21:49:05 DEBUG juju.worker.uniter.operation executor.go:83 lock released
2018-10-08 21:49:05 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "config-changed" hook
2018-10-08 21:49:05 DEBUG juju.worker.uniter agent.go:17 [AGENT-STATUS] error: hook failed: "config-changed"

Revision history for this message
Bruno Carvalho (brunowcs) wrote :

This problem may be related to my crushmap:

ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 82.63184 root default
-2 65.47192 rack sata
-4 32.73596 host uat-l-stor-11
 0 5.45599 osd.0 up 1.00000 1.00000
 1 5.45599 osd.1 up 1.00000 1.00000
 2 5.45599 osd.2 up 1.00000 1.00000
 3 5.45599 osd.3 up 1.00000 1.00000
 4 5.45599 osd.4 up 1.00000 1.00000
 5 5.45599 osd.5 up 1.00000 1.00000
-5 32.73596 host uat-l-stor-12
 6 5.45599 osd.6 up 1.00000 1.00000
 7 5.45599 osd.7 up 1.00000 1.00000
 8 5.45599 osd.8 up 1.00000 1.00000
 9 5.45599 osd.9 up 1.00000 1.00000
10 5.45599 osd.10 up 1.00000 1.00000
11 5.45599 osd.11 up 1.00000 1.00000
-3 17.15991 rack ssd
-6 8.57996 host uat-l-stor-13
12 1.42999 osd.12 up 1.00000 1.00000
13 1.42999 osd.13 up 1.00000 1.00000
14 1.42999 osd.14 up 1.00000 1.00000
15 1.42999 osd.15 up 1.00000 1.00000
16 1.42999 osd.16 up 1.00000 1.00000
17 1.42999 osd.17 up 1.00000 1.00000
-7 8.57996 host uat-l-stor-14
18 1.42999 osd.18 up 1.00000 1.00000
19 1.42999 osd.19 up 1.00000 1.00000
20 1.42999 osd.20 up 1.00000 1.00000
21 1.42999 osd.21 up 1.00000 1.00000
22 1.42999 osd.22 up 1.00000 1.00000
23 1.42999 osd.23 up 1.00000 1.00000

I believe my problem lies in this function (shown here with my debug statements added):

def get_upgrade_position(osd_sorted_list, match_name):
    """Return the upgrade position for the given osd.

    :param osd_sorted_list: list. Osds sorted
    :param match_name: str. The osd name to match
    :returns: int. The position or None if not found
    """
    for index, item in enumerate(osd_sorted_list):
        log('index %s' % index, DEBUG)
        log('item.name %s' % item.name, DEBUG)
        log('match_name %s' % match_name, DEBUG)

        if item.name == match_name:
            log('pass in if %s' % index, DEBUG)
            return index
    return None

See the logs below: the if condition is never true, so the function returns None ("DEBUG juju-log upgrade position None").

2018-10-09 23:41:29 DEBUG worker.uniter.jujuc server.go:178 running hook tool "juju-log"
2018-10-09 23:41:29 DEBUG juju-log index 0
2018-10-09 23:41:29 DEBUG worker.uniter.jujuc server.go:178 running hook tool "juju-log"
2018-10-09 23:41:29 DEBUG juju-log item.name sata
2018-10-09 23:41:29 DEBUG worker.uniter.jujuc server.go:178 running hook tool "juju-log"
2018-10-09 23:41:29 DEBUG juju-log match_name uat-l-stor-11
2018-1...
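
To make this concrete, a toy reproduction (the namedtuple is a hypothetical stand-in for ceph.utils.CrushLocation): with a root -> rack -> host map, the old parser hands back the rack buckets, so no entry ever matches the unit's hostname:

from collections import namedtuple

CrushLocation = namedtuple('CrushLocation', ['name', 'identifier'])

# The old get_osd_tree walked only nodes[0]'s children, so for this
# crushmap it returns the rack buckets instead of the hosts.
osd_sorted_list = [CrushLocation('sata', -2), CrushLocation('ssd', -3)]

def get_upgrade_position(osd_sorted_list, match_name):
    for index, item in enumerate(osd_sorted_list):
        if item.name == match_name:
            return index
    return None

# No rack name equals the hostname, so the position is None and
# roll_osd_cluster crashes on position - 1.
assert get_upgrade_position(osd_sorted_list, 'uat-l-stor-11') is None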


Revision history for this message
James Page (james-page) wrote :

Re-opening this bug as I don't think we quite have it fixed based on comments from Bruno and Fabio.

Changed in charms.ceph:
status: Fix Released → New
Changed in charm-ceph-osd:
status: Fix Released → New
Revision history for this message
James Page (james-page) wrote :

Please could you attach the output of:

   ceph osd tree --format=json

so I can write a test case to reproduce.

Revision history for this message
Bruno Carvalho (brunowcs) wrote :

My problem is on line 583:

parent_nodes = [
                node for node in json_tree['nodes'] if node['type'] == 'root']

The check on node['type'] == 'root' is limited to the tree's root buckets; it does not work with a crushmap like mine.

I have two racks below the root; for this function to work with my crushmap, I would need to change 'root' to 'rack'.

This function could be more flexible.

# ceph osd tree --format=json

{"nodes":[{"id":-1,"name":"default","type":"root","type_id":10,"children":[-3,-2]},{"id":-2,"name":"sata","type":"rack","type_id":3,"pool_weights":{},"children":[-5,-4]},{"id":-4,"name":"uat-l-stor-11","type":"host","type_id":1,"pool_weights":{},"children":[5,4,3,2,1,0]},{"id":0,"device_class":"hdd","name":"osd.0","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":1,"device_class":"hdd","name":"osd.1","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":2,"device_class":"hdd","name":"osd.2","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":3,"device_class":"hdd","name":"osd.3","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":4,"device_class":"hdd","name":"osd.4","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":5,"device_class":"hdd","name":"osd.5","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":-5,"name":"uat-l-stor-12","type":"host","type_id":1,"pool_weights":{},"children":[11,10,9,8,7,6]},{"id":6,"device_class":"hdd","name":"osd.6","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":7,"device_class":"hdd","name":"osd.7","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":8,"device_class":"hdd","name":"osd.8","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":9,"device_class":"hdd","name":"osd.9","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":10,"device_class":"hdd","name":"osd.10","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":11,"device_class":"hdd","name":"osd.11","type":"osd","type_id":0,"crush_weight":5.455994,"depth":3,"pool_weights":{},"exists":1,"status":"up","reweight":1.000000,"primary_affinity":1.000000},{"id":-3,"name":"ssd","type":"rack","type_id":3,"p...


Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charms.ceph (master)

Reviewed: https://review.openstack.org/609759
Committed: https://git.openstack.org/cgit/openstack/charms.ceph/commit/?id=7ff0534471c77231307f7c4668fafccee4a9c4bc
Submitter: Zuul
Branch: master

commit 7ff0534471c77231307f7c4668fafccee4a9c4bc
Author: James Page <email address hidden>
Date: Thu Oct 11 17:18:36 2018 +0100

    Support multi-tier hierarchy in Ceph OSD tree

    Fully support multi-tier hierarchy within the Ceph OSD tree; this
    change refactors the get_osd_tree function to simply filter for
    nodes of type 'host' which was in effect what the original code
    did, but in a more efficient manner.

    This change includes a set of test data taken directly from
    the bug report.

    Change-Id: I4734a5f6279c02f13bc4eafec15851968881dd5a
    Closes-Bug: 1788722
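
A minimal sketch of the refactored filter this commit message describes (function name assumed; the real code is charms.ceph's get_osd_tree):

def host_buckets(json_tree):
    """Keep only 'host' buckets, regardless of how many roots, racks,
    or other tiers sit above them in the CRUSH hierarchy.
    """
    return [node for node in json_tree['nodes']
            if node.get('type') == 'host']

# For Bruno's crushmap this returns uat-l-stor-11..14 whether the hosts
# hang off a root bucket directly or off intermediate rack buckets.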

Changed in charms.ceph:
status: New → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-ceph-osd (master)

Change abandoned by James Page (<email address hidden>) on branch: master
Review: https://review.openstack.org/595914
Reason: Fixed under another review and then refactored completely!

James Page (james-page)
Changed in charm-ceph-osd:
status: New → In Progress
assignee: Chris MacNaughton (chris.macnaughton) → James Page (james-page)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceph-osd (master)

Reviewed: https://review.openstack.org/609720
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=ce97b7a479388dac23899de0a3c7f5c1c4ee2d84
Submitter: Zuul
Branch: master

commit ce97b7a479388dac23899de0a3c7f5c1c4ee2d84
Author: James Page <email address hidden>
Date: Thu Oct 11 15:15:07 2018 +0100

    Resync ceph helpers

    Resync ceph helpers, picking up fixes for:

     - Upgrades from Luminous to Mimic.
     - Correct build of OSD list in more complex CRUSH
       configurations, resolving upgrade issues.

    Closes-Bug: 1788722

    Change-Id: I7d8fca74ec6eadae21a6e669e8b2522d9e4c9367

Changed in charm-ceph-osd:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-ceph-osd (stable/18.08)

Fix proposed to branch: stable/18.08
Review: https://review.openstack.org/611618

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceph-osd (stable/18.08)

Reviewed: https://review.openstack.org/611618
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=d4a964a8d78c4f739808135ad49039cf28b1ba09
Submitter: Zuul
Branch: stable/18.08

commit d4a964a8d78c4f739808135ad49039cf28b1ba09
Author: James Page <email address hidden>
Date: Thu Oct 11 15:15:07 2018 +0100

    Resync ceph helpers

    Resync ceph helpers, picking up fixes for:

     - Upgrades from Luminous to Mimic.
     - Correct build of OSD list in more complex CRUSH
       configurations, resolving upgrade issues.

    Closes-Bug: 1788722

    Change-Id: I7d8fca74ec6eadae21a6e669e8b2522d9e4c9367
    (cherry picked from commit ce97b7a479388dac23899de0a3c7f5c1c4ee2d84)

Revision history for this message
James Page (james-page) wrote :

This work was completed a while back; marking fix released.

Changed in charm-ceph-osd:
status: Fix Committed → Fix Released