ceph-osd deploy on partition fails

Bug #1735087 reported by KamalaVenkatesh
Affects: ceph-osd (Juju Charms Collection)

Bug Description

Background: I wanted to deploy ceph-osd on a single-disk node, where the operating system and the ceph-osd block device share the same disk (via separate partitions).

Issue: Deployment of ceph-osd on a partition failed, with the following error in juju status: "Unable to find block device /dev/sda2"

Observation: On analysing the logs, I observed that the ceph-disk prepare command was executed for the disk partition, but log entries for the activation of the partition were missing.

KamalaVenkatesh (kamala) wrote:

Further debugging of the code led me to the following function in the "cs:ceph-osd-249" charm:

def osdize_dev(dev, osd_format, osd_journal, reformat_osd=False,
               ignore_errors=False, encrypt=False, bluestore=False):

This function was missing the command to activate a disk partition; it could handle activation only for an entire disk.

I added a "ceph-disk activate" command to the function above, after "ceph-disk prepare", in lib/ceph/util.py. With this change, deployment of ceph-osd on a partition was successful.
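The shape of the fix can be sketched as follows. This is a minimal reconstruction, not the charm's actual code: the helper `is_partition` is a hypothetical stand-in for the charm's own device/partition detection, and the function only builds the command lists rather than executing them.

```python
def build_osdize_commands(dev):
    """Sketch of the command sequence the charm needs for a device.

    For a whole disk, `ceph-disk prepare` is enough: udev rules
    trigger activation automatically. For a bare partition, an
    explicit `ceph-disk activate` step must follow.
    """
    cmds = [['ceph-disk', 'prepare', dev]]
    if is_partition(dev):
        # The missing step reported in this bug: activate the
        # partition explicitly after preparing it.
        cmds.append(['ceph-disk', 'activate', dev])
    return cmds


def is_partition(dev):
    # Hypothetical check: a device name ending in a digit
    # (e.g. /dev/sda2) is treated as a partition.
    return dev[-1].isdigit()
```

For example, `build_osdize_commands('/dev/sda2')` yields both the prepare and activate commands, while `build_osdize_commands('/dev/sdb')` yields only the prepare command.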

Note: While adding the "ceph-disk activate" command, I ran into one more issue: the partition array contained an empty value, and splitting it with the split method threw an error because that value was inaccessible. Adding a space as the delimiter while splitting the partition array resolved the issue.
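The report does not show the parsing code itself, but the failure mode can be reconstructed roughly as follows (the sample output string is illustrative):

```python
# Splitting command output on newlines leaves a trailing empty
# entry, and indexing into its "fields" raises IndexError.
raw = "/dev/sda1 ext4\n/dev/sda2 xfs\n"
lines = raw.split('\n')  # ['/dev/sda1 ext4', '/dev/sda2 xfs', '']

# This would fail on the final empty entry:
#   fields = [line.split()[0] for line in lines]  # IndexError

# Skipping empty entries avoids the error:
partitions = [line.split()[0] for line in lines if line.strip()]
```

Here `partitions` ends up as `['/dev/sda1', '/dev/sda2']`; the empty trailing entry is filtered out before its fields are accessed.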

KamalaVenkatesh (kamala) wrote:

The issue regarding the empty value in the partition array has been fixed.

