Finally, after some time, I was able to reproduce this issue using the regular lvm+iscsi backend:
root@ostacktrustycomp:~# multipath -F
root@ostacktrustycomp:~# multipath -ll
<nothing>
root@ostacktrustycontrol:~# nova volume-attach demo-instance1 `nova volume-list | grep avai | awk '{print $2}'`
root@ostacktrustycomp:~# multipath -ll
33000000100000001 dm-1 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 3:0:0:1 sda 8:0 active ready running
root@ostacktrustycontrol:~# nova volume-detach demo-instance1 `nova volume-list | grep "volume" | awk '{print $2}'`
root@ostacktrustycomp:~# multipath -ll
33000000100000001 dm-1 ,
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
`- #:#:#:# - #:# active faulty running
Although the last path was "faulty", it was still there.
I'm having some problems getting the second path activated on the nova-compute node (using the cinder LVM backend + libvirt multipath), but this already shows the behavior.
My next step is to review the libvirt multipath code (for the addition of the second path) AND the removal issue (reported here).
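For anyone wanting to spot this state on a compute node, here is a minimal sketch that detects the orphaned path entries: once the underlying SCSI device is removed on detach, multipathd prints "#:#:#:#" in place of the H:C:T:L address. The sample string below is just the `multipath -ll` output from this report; on a live node you would pipe `multipath -ll` in directly instead.

```shell
# Sketch: count multipath path lines whose SCSI device is gone
# (shown as "#:#:#:#"), i.e. stale entries left behind after detach.
# The sample is the output pasted in this report.
sample='33000000100000001 dm-1 ,
size=1.0G features='\''0'\'' hwhandler='\''0'\'' wp=rw
`-+- policy='\''round-robin 0'\'' prio=0 status=active
  `- #:#:#:# - #:# active faulty running'

# On a live node: stale=$(multipath -ll | grep -c '#:#:#:#')
stale=$(printf '%s\n' "$sample" | grep -c '#:#:#:#')
echo "stale paths: $stale"
```

As a manual workaround, a map left in this state can be flushed with `multipath -f <wwid>` (or `multipath -F` for all unused maps), though the proper fix is the path removal on detach discussed above.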
Thank you very much
Rafael Tinoco