disks with existing bluestore LVM data will be reformatted without warning, unlike filestore

Bug #1841010 reported by Trent Lloyd
Affects: Ceph OSD Charm
Status: Triaged
Importance: High
Assigned to: Unassigned
Milestone: (none)

Bug Description

If you deploy ceph-osd to a machine whose disks already contain bluestore LVM data from a previous install, the charm will reformat those disks and provision them for the new cluster.

This is unexpected: for filestore, the charm was deliberately implemented to require explicitly zapping non-clean disks, precisely to prevent the kind of accidental data loss previously experienced with the osd-reformat option.

While this doesn't normally happen within the same install, because the charm keeps track of which device paths it has already initialized, it is still a concern that the charm will overwrite a non-empty disk, as there may be bugs or scenarios where that could occur.
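For reference, a disk that still carries bluestore data from a previous install typically shows up with a ceph-<uuid>/osd-block-<uuid> LVM logical volume on it, as in the lsblk output quoted in the comment further down. A minimal sketch of how such a device could be recognised before anything is written to it, assuming lsblk's JSON output is available (illustrative only; has_bluestore_lvm is a hypothetical helper, not the charm's existing code):

import json
import subprocess

def has_bluestore_lvm(device):
    """Return True if `device` already carries an LVM child whose name
    looks like a ceph OSD block volume (the osd--block LVs visible in lsblk)."""
    out = subprocess.check_output(
        ["lsblk", "--json", "--output", "NAME,TYPE", device])
    tree = json.loads(out)
    for disk in tree.get("blockdevices", []):
        for child in disk.get("children") or []:
            if child.get("type") == "lvm" and "osd--block" in child.get("name", ""):
                return True
    return False

With the layout shown in the first lsblk listing below, has_bluestore_lvm("/dev/sdb") would return True, while sdc and sdd would not.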

= Steps to reproduce =

To reproduce, deploy ceph-osd to a physical machine (MAAS), delete the model (without wiping the disks) and redeploy it.

= Desired behavior =

The charm should initialize a device only if it contains no data at all, i.e. no label or signature of any kind.
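A minimal sketch of such a guard, assuming the simplest possible "no data at all" test (read the start of the device and refuse if any byte is non-zero); is_pristine_disk and maybe_initialize are hypothetical helpers used purely for illustration, not the charm's actual functions, and a real check would also have to cover signatures stored at higher offsets:

def is_pristine_disk(device, probe_bytes=2048):
    """Treat a device as pristine only if its first `probe_bytes` bytes are
    all zero, i.e. no partition table or filesystem label at the start."""
    with open(device, "rb") as dev:
        data = dev.read(probe_bytes)
    return data == b"\x00" * len(data)

def maybe_initialize(device):
    # Refuse to touch anything that is not demonstrably empty; the operator
    # would have to zap the disk explicitly before it gets reused.
    if not is_pristine_disk(device):
        raise RuntimeError("%s is not pristine; refusing to initialize it"
                           % device)
    # ... only now hand the device over to ceph-volume ...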

Tags: seg

Changed in charm-ceph-osd:
  status: New → Triaged
  importance: Undecided → High

Felipe Reyes (freyes):
  tags: added: seg
nikhil kshirsagar (nkshirsagar) wrote:

This does not appear to be a bug; I have discussed this with Trent.

After deploying an OSD using sdb on gardener, lsblk shows:

root@gardener:/var/log/juju# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 3.7T 0 part /
sdb 8:16 0 3.7T 0 disk
└─ceph--14427572--ae02--4a5b--89f8--bbb24d1a9155-osd--block--14427572--ae02--4a5b--89f8--bbb24d1a9155 253:0 0 3.7T 0 lvm
sdc 8:32 0 3.7T 0 disk
sdd

Then created a partition on sdc with an xfs filesystem on it (did not bother mounting it):

root@gardener:/var/log/juju# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 3.7T 0 part /
sdb 8:16 0 3.7T 0 disk
└─ceph--14427572--ae02--4a5b--89f8--bbb24d1a9155-osd--block--14427572--ae02--4a5b--89f8--bbb24d1a9155 253:0 0 3.7T 0 lvm
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 467M 0 part <-------
sdd

Created the xfs filesystem on sdc1; the sdc1 partition starts at an offset beyond 10 MB into the device.

root@gardener:/var/log/juju# wipefs /dev/sdc
DEVICE OFFSET TYPE UUID LABEL
sdc 0x1fe dos
        ^^^
      10.5M
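As an aside, wipefs (via libblkid) probes the known signature locations on a device rather than only its first bytes, and it prints nothing at all for a completely clean disk, so it can serve as a blunt "does this device carry any label?" probe. A hedged sketch, with device_has_signatures being a hypothetical helper rather than anything the charm does today:

import subprocess

def device_has_signatures(device):
    """Return True if wipefs reports any filesystem or partition-table
    signature on `device`; wipefs produces no output for a clean device."""
    out = subprocess.check_output(["wipefs", device], text=True)
    return bool(out.strip())

For the sdc shown above this would return True because of the dos label, even though the xfs filesystem was never mounted.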

ubuntu@z-rotomvm33:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
reproducernikhil segmaas-default segmaas/default 2.9.10 unsupported 09:32:28Z

App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 15.2.13 active 1 ceph-mon charmstore stable 56 ubuntu Unit is ready and clustered
ceph-osd 15.2.13 active 1 ceph-osd charmstore stable 312 ubuntu Unit is ready (1 OSD)

Unit Workload Agent Machine Public address Ports Message
ceph-mon/1* active idle 2 10.230.57.23 Unit is ready and clustered
ceph-osd/0* active idle 1 10.230.57.22 Unit is rea...
