Comment 3 for bug 1374999

Rafael David Tinoco (rafaeldtinoco) wrote:

Finally, after some time, I was able to reproduce this issue using the regular LVM+iSCSI backend:

root@ostacktrustycomp:~# multipath -F
root@ostacktrustycomp:~# multipath -ll
<nothing>

root@ostacktrustycontrol:~# nova volume-attach demo-instance1 `nova volume-list | grep avai | awk '{print $2}'`

root@ostacktrustycomp:~# multipath -ll
33000000100000001 dm-1 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 3:0:0:1 sda 8:0 active ready running
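
For reference, the single path shown above (3:0:0:1 on sda) can be correlated with its backing iSCSI session; print level 3 lists the attached SCSI devices per session:

root@ostacktrustycomp:~# iscsiadm -m session -P 3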

root@ostacktrustycontrol:~# nova volume-detach demo-instance1 `nova volume-list | grep "volume" | awk '{print $2}'`

root@ostacktrustycomp:~# multipath -ll
33000000100000001 dm-1 ,
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:# - #:# active faulty running

Although the last path was marked "faulty"... it was still there.
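
As a manual workaround (just a sketch of the cleanup the detach path should be doing, not the fix itself), the stale map can be flushed by name, assuming no process still holds it open; the map name is taken from the output above:

root@ostacktrystcomp:~# multipath -f 33000000100000001

(multipath -F, as used at the start of the transcript, flushes all unused maps at once; -f targets a single map.)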

I'm having some trouble getting the second path activated on the nova-compute node (using the cinder LVM backend + libvirt multipath), but this already demonstrates the behavior.
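
For completeness, the usual way to bring up a second path is to log the initiator into a second portal on the same target; the portal IP and target IQN below are purely illustrative:

root@ostacktrustycomp:~# iscsiadm -m discovery -t sendtargets -p 192.168.0.2:3260
root@ostacktrustycomp:~# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-XXXX -p 192.168.0.2:3260 --login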

My next step is to review the libvirt multipath code, both for the addition of the second path AND for the removal issue (reported here).
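
While reviewing the removal path, the leftover map can also be inspected directly at the device-mapper level, independent of multipath -ll; the map name again comes from the transcript above:

root@ostacktrustycomp:~# dmsetup ls --target multipath
root@ostacktrustycomp:~# dmsetup info 33000000100000001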

Thank you very much,

Rafael Tinoco