Comment 2 for bug 1708212

Andrew Wilkins (axwalk) wrote:

Billy, I can reproduce this with the following (on AWS):
 - juju deploy -n 3 ceph-mon
 - juju deploy -n 3 ceph-osd --storage osd-devices=ebs,32G,2 --storage osd-journals=ebs,8G,1
 - juju add-relation ceph-mon ceph-osd

(wait for all to settle)

Run "juju storage" to list the storage connected to ceph-osd/0, and detach one of the osd-devices storage instance. In my case, I have osd-devices/0 attached to ceph-osd/0, so I run:

 - juju detach-storage osd-devices/0

Wait, and the storage will show as "detached" in Juju. It seems that we're marking it as detached too eagerly: in fact, the EBS volume is still attached to the machine, because the block device is still mounted and in use by the ceph-osd process.
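You can confirm this on the unit's machine, for example (the device name below is hypothetical; substitute whichever block device backs osd-devices/0):

 - juju ssh ceph-osd/0 lsblk
 - juju ssh ceph-osd/0 "grep xvdf /proc/mounts"

lsblk should still list the EBS device with a mountpoint, and the mount entry should show it in use under /var/lib/ceph/osd.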

The ceph-osd charm should be handling the "storage-detaching" hook by stopping the ceph-osd process and unmounting the block device. Unless that happens, the EBS volume cannot be cleanly detached from the machine.
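
As a minimal sketch of what such a hook might do (the OSD id and mount point here are hypothetical; a real hook would need to map the detaching storage instance to the right OSD):

 #!/bin/sh
 # storage-detaching hook (sketch): stop the OSD that uses this
 # device, then unmount it so the volume can detach cleanly.
 set -e
 OSD_ID=0                                  # hypothetical: derive from the detaching storage instance
 systemctl stop "ceph-osd@${OSD_ID}"       # stop the ceph-osd daemon for this OSD
 umount "/var/lib/ceph/osd/ceph-${OSD_ID}" # release the block device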