Comment 4 for bug 1178721

Revision history for this message
Jernej Jakob (jjakob) wrote :

Thank you for your help. I wasn't aware of the incompatibility caused by co-deploying LVM and multipath; my assumption was that multipath would be configured to take precedence over LVM when PVs sit on multipathed block devices, since the reverse (multiple paths on top of LVM logical volumes) would not make sense. I would assume this is a fairly common scenario: for example, you may have one volume mapped from a SAN over FC on which the tools LVM offers would be valuable.

It would seem that a simple check at boot time could handle this: detect whether LVM and multipath are co-deployed, start multipath first, let it create its paths, and then add the devices it found to LVM's filter.
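Something along these lines in lvm.conf could express that precedence. This is only a sketch; the exact device names depend on the distribution and on multipath's user_friendly_names setting, so the regexes would need adjusting:

```
# /etc/lvm/lvm.conf (sketch): scan only multipath devices, reject raw paths
devices {
    # Accept device-mapper multipath devices first, then reject the
    # underlying /dev/sd* paths so LVM never sees them directly.
    filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|", "r|.*|" ]
}
```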

I understand, however, that this is by no means common usage and requires an administrator skilled enough to be aware of the possible failure scenarios.

Regarding the multipath config itself, this is the default config, functionally the same as the one HP proposes for the MSA2012fc.
The virtual setup is of course bogus and not meant to be used; it only demonstrates the failure mode. The actual servers will have a dual-port FC HBA, with each port going to a separate zone on a switch; the switch in turn is connected with 2 cables to each of the SAN's 2 controllers (4 in total), so there will be 2 paths per volume. This SAN requires that layout in order to survive a controller failure.
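For reference, the generic part of the config looks roughly like the sketch below. This deliberately omits the HP vendor-specific device entry (vendor/product strings, path grouping policy), which I am not reproducing from memory:

```
# /etc/multipath.conf (sketch): generic defaults only, no vendor entry
defaults {
    user_friendly_names yes
}
blacklist {
    # Keep local (non-SAN) disks out of multipath
    devnode "^(ram|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
```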

I just tried this on the physical server, and it still doesn't work as it should. In lvm.conf I blacklisted all the sd* devices and allowed only /dev/disk/by-id. Or should the IDs of the multipath volumes be blacklisted as well, allowing only the single PV's ID?
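For clarity, the filter I mean looks roughly like this; the commented-out stricter variant uses a placeholder WWID, not the real one:

```
# /etc/lvm/lvm.conf (sketch): reject raw sd* paths, allow only by-id entries
devices {
    filter = [ "r|^/dev/sd.*|", "a|^/dev/disk/by-id/.*|", "r|.*|" ]

    # Stricter alternative: allow only the one multipathed PV
    # (placeholder id; substitute the volume's real scsi-3... WWID):
    # filter = [ "a|^/dev/disk/by-id/scsi-3PLACEHOLDER.*|", "r|.*|" ]
}
```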