OpenStack Compute (Nova)

Removing iSCSI volumes fails with "Device or resource busy."

Reported by Christian Berendt on 2011-02-25
Affects: OpenStack Compute (nova)
Importance: Medium
Assigned to: Zhongyue Luo

Bug Description

I'm currently testing creating, attaching, detaching, and removing iSCSI volumes (with a 5-second pause after each action).

After running the test for one hour, some iSCSI volumes were left over. They couldn't be removed because "Device or resource busy." was returned while running "ietadm --op delete --tid=20".

Maybe the 5-second pause after each action is too short, but I think nova-volume should handle the case where a call to ietadm fails and retry it 5 times or so. I tried to manually remove the targets after stopping the test, and that worked.
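The retry behaviour suggested above could look something like the following sketch. This is not the actual nova-volume code; `execute_with_retry` is a hypothetical helper, and the attempt count and delay are the values proposed in the report:

```python
import time


def execute_with_retry(fn, attempts=5, delay=5):
    """Call fn(), retrying on failure.

    Hypothetical sketch of the suggested behaviour: retry a flaky
    command (e.g. an ietadm invocation that returns "Device or
    resource busy.") up to `attempts` times, sleeping `delay`
    seconds between tries, and re-raise the last error if all
    attempts fail.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
```

A wrapper like this would let delete_volume tolerate a target that is still briefly busy right after detach, instead of leaving the volume in error_deleting.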

---snip---
2011-02-25 23:08:16,547 DEBUG nova.volume.manager [-] volume volume-00000453: removing export from (pid=26259) delete_volume /usr/lib64/python2.6/site-packages/nova/volume/manager.py:140
2011-02-25 23:08:16,549 DEBUG nova.utils [-] Running cmd (subprocess): sudo ietadm --op show --tid=20 from (pid=26259) execute /usr/lib64/python2.6/site-packages/nova/utils.py:132
2011-02-25 23:08:16,570 DEBUG nova.utils [-] Running cmd (subprocess): sudo ietadm --op delete --tid=20 --lun=0 from (pid=26259) execute /usr/lib64/python2.6/site-packages/nova/utils.py:132
2011-02-25 23:08:16,591 DEBUG nova.utils [-] Running cmd (subprocess): sudo ietadm --op delete --tid=20 from (pid=26259) execute /usr/lib64/python2.6/site-packages/nova/utils.py:132
2011-02-25 23:08:16,612 DEBUG nova.utils [-] Result was 240 from (pid=26259) execute /usr/lib64/python2.6/site-packages/nova/utils.py:145
2011-02-25 23:08:16,628 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "/usr/lib64/python2.6/site-packages/nova/rpc.py", line 199, in _receive
(nova): TRACE: rval = node_func(context=ctxt, **node_args)
(nova): TRACE: File "/usr/lib64/python2.6/site-packages/nova/volume/manager.py", line 141, in delete_volume
(nova): TRACE: self.driver.remove_export(context, volume_ref)
(nova): TRACE: File "/usr/lib64/python2.6/site-packages/nova/volume/driver.py", line 311, in remove_export
(nova): TRACE: iscsi_target)
(nova): TRACE: File "/usr/lib64/python2.6/site-packages/nova/utils.py", line 151, in execute
(nova): TRACE: cmd=cmd)
(nova): TRACE: ProcessExecutionError: Unexpected error while running command.
(nova): TRACE: Command: sudo ietadm --op delete --tid=20
(nova): TRACE: Exit code: 240
(nova): TRACE: Stdout: ''
(nova): TRACE: Stderr: 'Device or resource busy.\n'
(nova): TRACE:
---snap---

---snip---
VOLUME vol-00000453 1 nova error_deleting (berendt, deimos, None, None) 2011-02-25T22:08:00Z
---snap---

---snip---
chronos:~ # euca-delete-volume vol-00000453
Unknown: Unknown: Volume status must be available
---snap---

Christian Berendt (berendt) wrote :

I tested creation and deletion several times with the following script and saw no issues, so it's probably a problem with detaching a volume.

---snip---
#!/bin/sh

for i in $(seq 1 30); do
  euca-create-volume -s 1 -z nova
done

sleep 30

for i in $(euca-describe-volumes | cut -f 2); do
  euca-delete-volume $i
done
---snap---

Vish Ishaya (vishvananda) wrote :

Perhaps libvirt isn't actually done with the volume when it returns from detach. We should probably put a delay in there, unless we can figure out a way to loop until the volume is actually clear (check lsof in a loop?).

Thierry Carrez (ttx) on 2011-03-03
Changed in nova:
importance: Undecided → Medium
status: New → Confirmed
Zhongyue Luo (zyluo) on 2011-12-27
Changed in nova:
assignee: nobody → LZY (lzyeval)
Zhongyue Luo (zyluo) on 2011-12-27
Changed in nova:
status: Confirmed → Fix Committed
Thierry Carrez (ttx) on 2012-01-25
Changed in nova:
milestone: none → essex-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx) on 2012-04-05
Changed in nova:
milestone: essex-3 → 2012.1