I redeployed my cluster and it happened again; on one of the OSD nodes, the same journal partition (/dev/sda4) is symlinked from two different OSDs:
[root@node-1 ~]# ls -l /var/lib/ceph/osd/ceph-*/journal
lrwxrwxrwx 1 root root 9 Jun 4 16:34 /var/lib/ceph/osd/ceph-11/journal -> /dev/sda4
lrwxrwxrwx 1 root root 9 Jun 4 16:30 /var/lib/ceph/osd/ceph-2/journal -> /dev/sda4
lrwxrwxrwx 1 root root 9 Jun 4 16:30 /var/lib/ceph/osd/ceph-5/journal -> /dev/sda5
lrwxrwxrwx 1 root root 9 Jun 4 16:31 /var/lib/ceph/osd/ceph-10/journal -> /dev/sda7
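A quick way to spot this kind of collision is to resolve every OSD's journal symlink and flag any target that appears more than once. This is just a sketch, not part of ceph-deploy; the temp-dir setup below only reproduces the symlink layout shown above so the check can be demonstrated, and on a real node you would point `readlink` at `/var/lib/ceph/osd/ceph-*/journal` directly:

```shell
#!/bin/sh
# Reproduce the broken layout in a scratch dir (hypothetical, for illustration).
tmp=$(mktemp -d)
mkdir -p "$tmp/ceph-2" "$tmp/ceph-5" "$tmp/ceph-11"
ln -s /dev/sda4 "$tmp/ceph-2/journal"
ln -s /dev/sda5 "$tmp/ceph-5/journal"
ln -s /dev/sda4 "$tmp/ceph-11/journal"   # same target as ceph-2: the bug

# Resolve each symlink; uniq -d prints only targets used by >1 OSD.
dups=$(readlink "$tmp"/ceph-*/journal | sort | uniq -d)
echo "$dups"
rm -rf "$tmp"
```

Any output at all means two OSDs share a journal partition; here it prints `/dev/sda4`.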
From the Puppet logs:
(/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd activate]/returns) change from notrun to 0 failed: ceph-deploy osd activate node-1:/dev/sde4:/dev/sda4 returned 1 instead of one of [0]
(/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) change from notrun to 0 failed: ceph-deploy osd prepare node-1:/dev/sde4:/dev/sda4 node-1:/dev/sdb4:/dev/sda5 returned 1 instead of one of [0]
ceph-deploy osd prepare node-1:/dev/sde4:/dev/sda4 node-1:/dev/sdb4:/dev/sda5 returned 1 instead of one of [0]
(/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd activate]/returns) change from notrun to 0 failed: ceph-deploy osd activate node-1:/dev/sdc4:/dev/sda4 node-1:/dev/sdd4:/dev/sda5 node-1:/dev/sde4:/dev/sda6 node-1:/dev/sdb4:/dev/sda7 returned 1 instead of one of [0]
ceph-deploy osd activate node-1:/dev/sdc4:/dev/sda4 node-1:/dev/sdd4:/dev/sda5 node-1:/dev/sde4:/dev/sda6 node-1:/dev/sdb4:/dev/sda7 returned 1 instead of one of [0]