patch for multipath disks
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceph OSD Charm | Invalid | Undecided | Unassigned | |
| lvm2 (Ubuntu) | New | Undecided | Unassigned | |
Bug Description
According to this page:
https:/
there is a bug in Ceph that affects the use of Fibre Channel disks. The following patch to hooks/ceph_hooks.py fixes the issue:
```diff
*** hooks/ceph_hooks.py 2018-02-02 16:15:40.304388602 +0100
--- hooks/ceph_
***************
*** 369,378 ****
  @hooks.
  def prepare_
-     # patch for missing dm devices
-     # see: https:/
-     patch_persisten
-
      osd_journal = get_journal_
      check_
      log("got journal devs: {}".format(
--- 369,374 ----
***************
*** 558,579 ****
      log('Updating status.')
- # patch for missing dm devices
- from subprocess import check_call
-
-
- CEPH_PERSITENT_
-
-
- def patch_persisten
-     if os.path.
-         check_call(['sed', '-i', 's/KERNEL!
-                     CEPH_PERSITENT_
-         log('Patched %s' % CEPH_PERSITENT_
-     else:
-         log('Missing %s' % CEPH_PERSITENT_
-
-
  if __name__ == '__main__':
      try:
--- 554,559 ----
```
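For context, the hotfix being removed worked by rewriting a udev persistent-storage rules file in place so that device-mapper (`dm-*`) devices are no longer skipped. The rules path and the exact `sed` expression are truncated in the report, so the sketch below is a hypothetical reconstruction only: the `CEPH_PERSISTENT_RULES` path and the substitution shown are made-up illustrations of the shape of such a function, not the actual values from the charm.

```python
import logging
import os
from subprocess import check_call

log = logging.getLogger(__name__)

# Hypothetical path; the real rules file name is truncated in the bug report.
CEPH_PERSISTENT_RULES = '/lib/udev/rules.d/60-persistent-storage.rules'


def patch_persistent_rules(rules_path=CEPH_PERSISTENT_RULES):
    """Rewrite a udev rules file in place so dm-* devices are not skipped.

    The original hotfix shelled out to ``sed -i``; the substitution used
    here is illustrative only, since the real expression is truncated in
    the report.
    """
    if os.path.exists(rules_path):
        # Example substitution: drop the match that excludes dm-* kernels.
        check_call(['sed', '-i',
                    r's/KERNEL!="dm-\*"/KERNEL!=""/',
                    rules_path])
        log.info('Patched %s', rules_path)
    else:
        log.info('Missing %s', rules_path)
```

As the follow-up comment notes, this is a workaround at the charm level; the proper fix belongs in the udev rules shipped by the distribution package.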
affects: udev (Ubuntu) → lvm2 (Ubuntu)
Ok, so the charm is a convenient place to hotfix this for an impacted deployment, but the right place to fix it is the udev package, so I'm switching the bug task to the right target.