Cannot detach storage
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Triaged | Low | Unassigned |
Bug Description
Juju reports that detaching the storage succeeded, but Ceph appears to ignore it: the OSD backed by the detached volume keeps running and the unit still reports it.
snap info juju | grep installed
installed: 2.3-alpha1+
===
juju storage | grep osd-devices
ceph-osd/0 osd-devices/0 block ebs vol-05015701126
ceph-osd/0 osd-devices/1 block ebs vol-04345b11abe
ceph-osd/1 osd-devices/3 block ebs vol-0eaada2acc2
ceph-osd/1 osd-devices/4 block ebs vol-08d7fefb838
ceph-osd/2 osd-devices/6 block ebs vol-0d0a9a3a8cb
ceph-osd/2 osd-devices/7 block ebs vol-0e0b54e0b70
===
juju detach-storage osd-devices/7
detaching osd-devices/7
===
juju storage | grep osd-devices
ceph-osd/0 osd-devices/0 block ebs vol-05015701126
ceph-osd/0 osd-devices/1 block ebs vol-04345b11abe
ceph-osd/1 osd-devices/3 block ebs vol-0eaada2acc2
ceph-osd/1 osd-devices/4 block ebs vol-08d7fefb838
ceph-osd/2 osd-devices/6 block ebs vol-0d0a9a3a8cb
===
juju status ceph-osd
Model Controller Cloud/Region Version SLA
ceph aws-controller aws/us-east-1 2.3-alpha1.1 unsupported
App Version Status Scale Charm Store Rev OS Notes
ceph-osd 10.2.7 active 3 ceph-osd jujucharms 245 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0 active idle 3 54.197.85.213 Unit is ready (2 OSD)
ceph-osd/1* active idle 4 54.146.68.53 Unit is ready (2 OSD)
ceph-osd/2 active idle 5 54.224.59.60 Unit is ready (2 OSD) <----------- SHOULD BE 1 OSD
Machine State DNS Inst id Series AZ Message
3 started 54.197.85.213 i-0a421dca4221bcf60 xenial us-east-1c running
4 started 54.146.68.53 i-016d906c158e2e2c0 xenial us-east-1e running
5 started 54.224.59.60 i-0d4b087f32320a560 xenial us-east-1a running
Relation Provides Consumes Type
mon ceph-mon ceph-osd regular
===
juju ssh ceph-osd/2 ps ax | grep /usr/bin/ceph-osd
7198 ? Ssl 0:55 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
7949 ? Ssl 0:50 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
The charm reports the number of OSDs as the number of running ceph-osd processes, so from that perspective it is reporting the correct value: the process for the detached device is still running. To narrow down where the issue lies, it would help to know whether the storage has truly been removed, or whether the disk is still attached to the running instance.
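A few checks could answer that. The following is a minimal sketch, not part of the original report: it assumes a ceph-mon/0 unit exists in this model (only the mon relation is shown above), that the AWS CLI is configured for us-east-1, and that you substitute the full volume ID that backed osd-devices/7 (the IDs above are truncated).
===
# On the unit: is the block device for the detached volume still visible?
juju ssh ceph-osd/2 lsblk

# From Ceph's point of view: is the OSD still up and in the CRUSH map?
# (run on a monitor unit, which has the admin keyring; ceph-mon/0 is assumed)
juju ssh ceph-mon/0 sudo ceph osd tree

# On the AWS side: is the EBS volume still attached to instance i-0d4b087f32320a560?
# <volume-id> is the full ID of the volume that backed osd-devices/7
aws ec2 describe-volumes --volume-ids <volume-id> \
    --query 'Volumes[0].Attachments' --region us-east-1
===
If lsblk and the AWS query still show the device attached, the problem is on the provisioning side (the detach never reached EC2); if the device is gone but the ceph-osd process survives, the charm simply never stopped and removed the OSD.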