hook error on deployment on power8: mon-relation-changed
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
ceph-osd (Juju Charms Collection) | Expired | Low | Unassigned |
Bug Description
Hi, when deploying Ceph on POWER8 we are consistently seeing a ceph-osd hook error in 'mon-relation-changed'.
Here is the error from the ceph-osd agent log. The charm appears to be trying to prepare a disk that is already up and running as an OSD.
2016-05-11 14:39:47 INFO mon-relation-
2016-05-11 14:39:47 INFO worker.uniter.jujuc server.go:172 running hook tool "juju-log" ["-l" "ERROR" "Unable to initialize device: /dev/disk/
2016-05-11 14:39:47 ERROR juju-log mon:49: Unable to initialize device: /dev/disk/
2016-05-11 14:39:47 INFO juju.worker.
2016-05-11 14:39:47 ERROR juju.worker.
Full ceph-osd agent log here: http://
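Since the charm appears to be preparing a device that is already an active OSD, a quick manual check on the OSD host can confirm this before retrying the hook. The sketch below is illustrative, not charm code: an active OSD's data partition shows up under /var/lib/ceph/osd in the mount table, and the device path used here is a placeholder for the real by-path device from the bundle's osd-devices option.

```shell
# Hedged sketch: check whether a device (or one of its partitions) is
# already mounted, which is what makes a re-run of the prepare step fail.
# The optional mounts-file argument exists only to make this testable;
# by default it reads the live mount table at /proc/mounts.
device_in_use() {
    dev=$1
    mounts=${2:-/proc/mounts}
    # Mounted partitions of $dev appear as lines starting with its path.
    grep -q "^$dev" "$mounts"
}

# Example usage (device path is illustrative):
if device_in_use /dev/sdb; then
    echo "/dev/sdb is in use; preparing it again would fail"
fi
```

Running `grep /var/lib/ceph/osd /proc/mounts` on the affected machine should likewise show whether the listed osd-devices are already mounted as OSD data partitions.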
ceph-osd:
  charm: cs:trusty/
  can-upgrade-to: cs:trusty/
  exposed: false
  service-status:
    current: error
    message: 'hook failed: "mon-relation-changed"'
    since: 11 May 2016 09:39:48-05:00
  relations:
    mon:
    - ceph
  units:
    ceph-osd/0:
      workload-status:
        current: error
        message: 'hook failed: "mon-relation-changed"'
        since: 11 May 2016 09:39:48-05:00
      agent-status:
        current: idle
        since: 11 May 2016 09:39:48-05:00
      version: 1.25.5
      machine: "4"
    ceph-osd/1:
      workload-status:
        current: error
        message: 'hook failed: "mon-relation-changed"'
        since: 29 Apr 2016 15:26:05-05:00
      agent-status:
        current: lost
        message: agent is not communicating with the server
        since: 29 Apr 2016 15:26:05-05:00
      version: 1.25.5
      machine: "5"
Bundle sections:
ceph:
  annotations:
    gui-x: '750'
    gui-y: '500'
  charm: cs:trusty/ceph
  num_units: 3
  options:
    fsid: cbd8508e-
    monitor-
    osd-devices: '/dev/disk/
    osd-reformat: 'yes'
    source: cloud:trusty-
  to:
  - '1'
  - '2'
  - '3'
ceph-osd:
  annotations:
    gui-x: '1000'
    gui-y: '500'
  charm: cs:trusty/ceph-osd
  num_units: 2
  options:
    osd-devices: '/dev/disk/
    osd-reformat: 'yes'
    source: cloud:trusty-
  to:
  - '4'
  - '5'
It looks like the error that surfaced is that the disk was already mounted and the mkfs failed. Which version of the ceph-osd charm are you using? The newest version has better detection around mounted drives.
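The mounted-drive detection mentioned above amounts to a guard that refuses to format a device that is currently in the mount table. The sketch below illustrates that idea only; it is not the charm's actual code, and the final echo stands in for the real mkfs/prepare step so the decision logic is testable (the mounts-file parameter is likewise a test convenience).

```shell
# Hedged sketch of a mounted-drive guard before formatting a device.
guard_mkfs() {
    dev=$1
    mounts=${2:-/proc/mounts}
    # Refuse to proceed if the device or any of its partitions is mounted.
    if grep -q "^$dev" "$mounts"; then
        echo "refusing to format $dev: it is mounted" >&2
        return 1
    fi
    # The real hook would run mkfs / ceph-disk prepare here.
    echo "ok to format $dev"
}
```

With a check like this, the hook can fail fast with a clear message (or skip the device) instead of letting mkfs error out on an already-active OSD.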