Error description
--------------------------
I used NFS as the cinder storage backend and successfully attached multiple volumes to nova instances.
However, when I tried to detach one of them, I found the following error in nova-compute.log.
For NFS volumes, every time you detach a volume, nova tries to umount the device path.
In nova/virt/libvirt/volume.py:
Line 632: class LibvirtNFSVolumeDriver(LibvirtBaseVolumeDriver):
Line 653:     def disconnect_volume(self, connection_info, disk_dev):
Line 661:         utils.execute('umount', mount_path, run_as_root=True)
This works when the device path is not busy.
If the device path is busy (or in use), nova should log a message and continue.
The problem is that instead of logging a message, the code raises an exception, which causes the above error.
I think the reason is that the 'if' statement at Line 663 fails to catch the device-busy message in exc.message: it looks for 'target is busy', but the umount error output contains 'device is busy'.
Therefore, the current code skips the 'if' branch, falls through to the 'else', and raises the exception.
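The intended behavior can be sketched as follows. This is a simplified, self-contained illustration of the disconnect_volume error handling, not nova's actual code; here `umount` stands in for nova's utils.execute, and the function name is made up for the example:

```python
import logging

LOG = logging.getLogger(__name__)


def umount_share(mount_path, umount):
    """Unmount an NFS share; log and continue if the mount is still in use.

    `umount` is any callable that raises an exception whose message
    includes umount's stderr, as nova's utils.execute does.
    """
    try:
        umount(mount_path)
    except Exception as exc:
        # util-linux reports 'target is busy', while umount.nfs on
        # Ubuntu 14.04 reports 'device is busy' (as in the log above).
        # Matching only one of the two strings misses the other case.
        if 'target is busy' in str(exc) or 'device is busy' in str(exc):
            LOG.debug("The NFS share %s is still in use.", mount_path)
        else:
            LOG.exception("Couldn't unmount the NFS share %s", mount_path)
            raise
```

With a check like this, a busy mount is logged and skipped, while any other umount failure still propagates to the caller.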
How to reproduce
--------------------------
(1) Prepare an NFS share and set it as the storage backend of your cinder
(refer to http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/NFS-driver.html)
In cinder.conf
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=<path to your nfs share list file>
(2) Create 2 empty volumes from cinder
(3) Create a nova instance and attach the 2 volumes above
(4) Then, try to detach one of them.
You will get the error in nova-compute.log: "Couldn't unmount the NFS share <your NFS mount path on nova-compute>"
Proposed Fix
--------------------------
I'm not sure whether any other OS outputs 'target is busy' in its umount error message.
Therefore, the first fix that comes to mind is to change the 'if' statement:
Before the fix:
if 'target is busy' in exc.message:
After the fix:
if 'device is busy' in exc.message:
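The mismatch is easy to confirm against the stderr captured in the error log. This is a standalone illustration using the exact message from the traceback:

```python
# stderr reported by umount.nfs in the nova-compute.log traceback
stderr = ("umount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: "
          "device is busy\n")

# the check before the fix never matches this message...
print('target is busy' in stderr)   # False

# ...while the fixed check does
print('device is busy' in stderr)   # True
```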
Tested Environment
--------------------------
OS: Ubuntu 14.04 LTS
Cinder NFS driver:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
Error log (nova-compute.log)
--------------------------
2014-07-07 17:48:46.175 3195 ERROR nova.virt.libvirt.volume [req-a07d077f-2ad1-4558-91fa-ab1895ca4914 c8ac60023a794aed8cec8552110d5f12 fdd538eb5dbf48a98d08e6d64def73d7] Couldn't unmount the NFS share 172.23.58.245:/NFSThinLun2
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Traceback (most recent call last):
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 675, in disconnect_volume
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     utils.execute('umount', mount_path, run_as_root=True)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     return processutils.execute(*cmd, **kwargs)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 193, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     cmd=' '.join(cmd))
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume ProcessExecutionError: Unexpected error while running command.
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Command: sudo nova-rootwrap /etc/nova/rootwrap.conf umount /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Exit code: 16
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stdout: ''
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stderr: 'umount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is busy\numount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is busy\n'