ceph-osd deploy on partition fails

Bug #1735087 reported by KamalaVenkatesh
This bug affects 1 person
Affects: ceph-osd (Juju Charms Collection)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Background: I wanted to deploy ceph-osd on a single-disk node, where the operating system and the ceph-osd block device share the same disk (by creating partitions).

Issue: Deploying ceph-osd on a partition fails, and juju status reports the error “Unable to find block device /dev/sda2”.

Observation: On analysing the logs, I could see that the ceph-disk prepare command was executed for the disk partition, but the log entries for activating the partition were missing.

Revision history for this message
KamalaVenkatesh (kamala) wrote:

Further debugging of the code led me to the function below in the “cs:ceph-osd-249” charm:

def osdize_dev(dev, osd_format, osd_journal, reformat_osd=False,
               ignore_errors=False, encrypt=False, bluestore=False):
    ...

This function only issues the commands needed to activate an entire disk; the command to activate a disk partition is missing.

I added a “ceph-disk activate” call to the function mentioned above, after the “ceph-disk prepare” call in lib/ceph/util.py. With this change, deploying ceph-osd on a partition succeeded.
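For illustration, a minimal sketch of the workaround, assuming the commands are invoked via subprocess; the helper name osdize_partition and its arguments are hypothetical, and the charm's real osdize_dev() takes many more options (journal, encryption, bluestore, and so on):

import subprocess

def osdize_partition(partition, osd_format='xfs'):
    """Prepare and activate a single partition as a Ceph OSD.

    Hypothetical helper illustrating the workaround; the real change
    belongs inside osdize_dev() in the charm.
    """
    # ceph-disk prepare formats the partition and writes the OSD metadata.
    subprocess.check_call(['ceph-disk', 'prepare', '--fs-type', osd_format,
                           partition])
    # For a pre-existing partition (unlike a whole disk) activation does not
    # appear to be triggered automatically, so call it explicitly.
    subprocess.check_call(['ceph-disk', 'activate', partition])

# e.g. osdize_partition('/dev/sda2')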

Note: While adding the “ceph-disk activate” command I ran into one more issue: the partition array contained an empty value, and splitting it with the split method raised an error because the value could not be accessed. Passing a space as the delimiter when splitting the partition array resolved the issue.
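As an aside, a small sketch of the kind of split problem described here; the sample output and variable names are hypothetical, and the exact fix applied in the charm may differ:

# Hypothetical output of a partition-listing command; the trailing
# newline leaves an empty string in the resulting list.
raw = "/dev/sda1\n/dev/sda2\n"
partitions = raw.split('\n')        # ['/dev/sda1', '/dev/sda2', '']
# Splitting or indexing into the empty entry (e.g. entry.split()[0])
# raises an IndexError, so skip empty values before using them.
partitions = [p for p in partitions if p.strip()]
print(partitions)                   # ['/dev/sda1', '/dev/sda2']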

Revision history for this message
KamalaVenkatesh (kamala) wrote:

The issue regarding the empty value in the partition array is fixed.
