Comment 0 for bug 1567036

Ryan Beisner (1chb1n) wrote:

When deploying ceph via the charm to a system with multipath storage, the block device names and their ordering are not predictable. This essentially makes automated deployment of ceph via the charm onto such systems impossible.

Take these devices:

/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf

Those devices are not usable by ceph (it errors out, indicating they are in use, because each disk is already claimed as a leg of a multipath map).
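
For what it's worth, that "in use" condition is visible from sysfs: each of those disks has a dm-* holder. A minimal sketch of such a check in Python (the helper name is mine; it assumes the standard /sys/block/<dev>/holders layout):

import os

def is_multipath_leg(dev):
    # True if a whole disk such as 'sda' is already claimed by a
    # device-mapper (multipath) device, i.e. it has dm-* holders in sysfs.
    holders = "/sys/block/{}/holders".format(dev)
    if not os.path.isdir(holders):
        return False
    return any(h.startswith("dm-") for h in os.listdir(holders))

e.g. is_multipath_leg("sdc") returns True on the host shown below, so a charm or operator could skip or translate those paths up front.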

The corresponding /dev/dm-N devices would, I believe, be usable if they were predictable. But they are not: /dev/dm-3 might represent /dev/sdc on one deployment, but after a tear down and redeploy, /dev/dm-3 might represent the disk that contains the root Linux filesystem (doh!).

For example:

root@gregory-ppc64:~# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0  0 264.3G  0 disk
├─sda1                      8:1  0     8M  0 part
├─sda2                      8:2  0 264.3G  0 part
└─mpath0 (dm-0)           252:0  0 264.3G  0 mpath
  ├─mpath0-part1 (dm-2)   252:2  0     8M  0 part
  └─mpath0-part2 (dm-3)   252:3  0 264.3G  0 part  /
sdb                        8:16  0 264.3G  0 disk
└─mpath2 (dm-1)           252:1  0 264.3G  0 mpath
sdc                        8:32  0 264.3G  0 disk
└─mpath3 (dm-5)           252:5  0 264.3G  0 mpath
sdd                        8:48  0 264.3G  0 disk
└─mpath4 (dm-4)           252:4  0 264.3G  0 mpath
sde                        8:64  0 264.3G  0 disk
└─mpath5 (dm-6)           252:6  0 264.3G  0 mpath
sdf                        8:80  0 264.3G  0 disk
└─mpath1 (dm-7)           252:7  0 264.3G  0 mpath
sdg                        8:96  0 264.3G  0 disk
├─sdg1                     8:97  0     8M  0 part
├─sdg2                     8:98  0 264.3G  0 part
└─mpath0 (dm-0)           252:0  0 264.3G  0 mpath
  ├─mpath0-part1 (dm-2)   252:2  0     8M  0 part
  └─mpath0-part2 (dm-3)   252:3  0 264.3G  0 part  /
sdh                       8:112  0 264.3G  0 disk
└─mpath2 (dm-1)           252:1  0 264.3G  0 mpath
sdi                       8:128  0 264.3G  0 disk
└─mpath3 (dm-5)           252:5  0 264.3G  0 mpath
sdj                       8:144  0 264.3G  0 disk
└─mpath4 (dm-4)           252:4  0 264.3G  0 mpath
sdk                       8:160  0 264.3G  0 disk
└─mpath5 (dm-6)           252:6  0 264.3G  0 mpath
sdl                       8:176  0 264.3G  0 disk
└─mpath1 (dm-7)           252:7  0 264.3G  0 mpath
sr0                        11:0  1 521.9M  0 rom
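
The names that do stay stable across a tear down and redeploy are the WWID-based symlinks udev creates under /dev/disk/by-id (dm-uuid-mpath-<WWID>), since those are derived from the multipath WWID rather than from the dm-N numbering. A minimal sketch in Python of resolving them (the function name is mine; it assumes those udev symlinks are present, as they are on a stock multipath setup):

import glob
import os

def stable_multipath_map():
    # Map each WWID-based multipath symlink to the /dev/dm-N node it
    # currently points at; the keys stay the same across redeploys.
    return {link: os.path.realpath(link)
            for link in glob.glob("/dev/disk/by-id/dm-uuid-mpath-*")}

Feeding those by-id paths (the keys) to the charm's osd-devices config, instead of /dev/dm-N or /dev/sdX, would be one way to sidestep the renumbering.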