This does not appear to be a bug. Have discussed this with Trent.

After deploying an OSD using sdb on gardener, lsblk shows:

root@gardener:/var/log/juju# lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0  3.7T  0 disk
├─sda1                     8:1    0  512M  0 part /boot/efi
└─sda2                     8:2    0  3.7T  0 part /
sdb                        8:16   0  3.7T  0 disk
└─ceph--14427572--ae02--4a5b--89f8--bbb24d1a9155-osd--block--14427572--ae02--4a5b--89f8--bbb24d1a9155 253:0 0 3.7T 0 lvm
sdc                        8:32   0  3.7T  0 disk
sdd

Then created a partition on sdc with an xfs filesystem on it (did not bother mounting it):

root@gardener:/var/log/juju# lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0  3.7T  0 disk
├─sda1                     8:1    0  512M  0 part /boot/efi
└─sda2                     8:2    0  3.7T  0 part /
sdb                        8:16   0  3.7T  0 disk
└─ceph--14427572--ae02--4a5b--89f8--bbb24d1a9155-osd--block--14427572--ae02--4a5b--89f8--bbb24d1a9155 253:0 0 3.7T 0 lvm
sdc                        8:32   0  3.7T  0 disk
└─sdc1  <-------           8:33   0  467M  0 part
sdd

Created the filesystem on sdc1. The sdc1 partition starts at an offset beyond 10MB into the device.

root@gardener:/var/log/juju# wipefs /dev/sdc
DEVICE OFFSET TYPE UUID LABEL
sdc    0x1fe  dos
^^^ 10.5M

ubuntu@z-rotomvm33:~$ juju status
Model             Controller       Cloud/Region     Version  SLA          Timestamp
reproducernikhil  segmaas-default  segmaas/default  2.9.10   unsupported  09:32:28Z

App       Version  Status  Scale  Charm     Store       Channel  Rev  OS      Message
ceph-mon  15.2.13  active      1  ceph-mon  charmstore  stable    56  ubuntu  Unit is ready and clustered
ceph-osd  15.2.13  active      1  ceph-osd  charmstore  stable   312  ubuntu  Unit is ready (1 OSD)

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/1*  active    idle   2        10.230.57.23           Unit is ready and clustered
ceph-osd/0*  active    idle   1        10.230.57.22           Unit is ready (1 OSD)

Machine  State    DNS           Inst id      Series  AZ       Message
1        started  10.230.57.22  gardener     bionic  default  Deployed
2        started  10.230.57.23  z-rotomvm35  bionic  default  Deployed

Now I tried to use sdc as a disk for the same OSD unit, using:

juju config ceph-osd osd-devices="/dev/sdb /dev/sdc"

juju status shows:

ubuntu@z-rotomvm33:~$ juju status
Model             Controller       Cloud/Region     Version  SLA          Timestamp
reproducernikhil  segmaas-default  segmaas/default  2.9.10   unsupported  09:40:21Z

App       Version  Status   Scale  Charm     Store       Channel  Rev  OS      Message
ceph-mon  15.2.13  active       1  ceph-mon  charmstore  stable    56  ubuntu  Unit is ready and clustered
ceph-osd  15.2.13  blocked      1  ceph-osd  charmstore  stable   312  ubuntu  Non-pristine devices detected, consult `list-disks`, `zap-disk` and `blacklist-*` actions.  <----

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/1*  active    idle   2        10.230.57.23           Unit is ready and clustered
ceph-osd/0*  blocked   idle   1        10.230.57.22           Non-pristine devices detected, consult `list-disks`, `zap-disk` and `blacklist-*` actions.

Machine  State    DNS           Inst id      Series  AZ       Message
1        started  10.230.57.22  gardener     bionic  default  Deployed
2        started  10.230.57.23  z-rotomvm35  bionic  default  Deployed

So it's likely that (as Trent explained to me) curtin was wiping the first part of the disk, even when MAAS had no disk config.
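For reference, the non-pristine state on sdc can be reproduced (and undone) with ordinary tools. A minimal sketch, assuming the same device name and roughly the partition layout shown in the lsblk output above; these are not the exact commands that were run:

# Sketch only: recreate the kind of signatures the charm flags as non-pristine.
# Partition start/end chosen to roughly match the ~10MiB offset and 467M size above.
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary xfs 10MiB 477MiB
mkfs.xfs /dev/sdc1

# Show the signatures that the pristine check would trip over:
wipefs /dev/sdc
wipefs /dev/sdc1

# To reuse the device as an OSD, the signatures have to be cleared first,
# e.g. via the ceph-osd charm's zap-disk action (as the blocked status suggests)
# or manually:
wipefs -a /dev/sdc1
wipefs -a /dev/sdc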
-----
Additional note: a hexdump of the disk verified that the first 2 MB of the sdb device were zeroed out when the model was destroyed, before we redeployed the OSD using the same sdb device.

root@gardener:/var/log/juju# hexdump -C /dev/sdb | more
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00200000  08 07 72 06 00 00 09 01  ec 04 00 00 15 0e fe 46  |..r............F|
00200010  fa 5f 11 eb 89 9c 00 16  3e 00 05 ac 07 00 00 00  |._......>.......|

0x00200000 = 2097152 in decimal = 2 MiB

But the partition on sdc was created more than 10M past the start of the device, so the ceph-osd charm is clearly able to detect the signature there and then refuse to consider that device for an OSD. I haven't tested with just a partition and no filesystem created on it, but it's unlikely that that would result in different behavior, so for now we are assuming this is not a bug.
-----
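For completeness, the offset arithmetic and the "first 2 MiB are zeroed" observation can be sanity-checked from a shell; a minimal sketch, assuming the same device names as above:

# 0x00200000 bytes in decimal and in MiB:
printf '%d\n' 0x00200000            # 2097152
echo $((0x00200000 / 1024 / 1024))  # 2

# Compare the first 2 MiB of sdb against /dev/zero; cmp exits 0 (silently)
# if the ranges are identical, i.e. the region really is zeroed.
cmp -n $((2 * 1024 * 1024)) /dev/sdb /dev/zero && echo "first 2 MiB of /dev/sdb are zeroed"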