block device mapping timeout in compute
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | Undecided | Akash Gangil | |
| Icehouse | Fix Released | Undecided | Unassigned | |
Bug Description
When booting instances with --block-device and an increased volume size, instances can go into an error state if the volume takes longer to create than the hard-coded value set in:
nova/compute/
def _await_
Here is the command used to repro:
nova boot --flavor ca8d889e-
--block-device source=
--nic net-id=
Test_Image_Instance
max_retries should be made configurable.
Looking through the different releases: Grizzly used 30, Havana used 60, and Icehouse uses 180.
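The polling loop in question can be sketched roughly as follows. This is an illustrative, self-contained version of the pattern (the function and exception names mirror the traceback below, but the signature and status values are assumptions, not nova's actual code); `max_retries` and `interval` are the values this report suggests exposing as config options instead of hard-coding.

```python
import time


class VolumeNotCreated(Exception):
    """Raised when a volume does not become available in time."""


def await_volume_created(get_status, volume_id, max_retries=60, interval=1.0):
    """Poll ``get_status`` until the volume leaves the 'creating' state.

    ``get_status`` is a callable returning the volume's current status
    string. Raises VolumeNotCreated after ``max_retries`` polls.
    """
    for attempt in range(1, max_retries + 1):
        if get_status(volume_id) != 'creating':
            return attempt  # volume finished (or errored) before the limit
        if attempt < max_retries:
            time.sleep(interval)
    raise VolumeNotCreated(
        "Volume %s did not finish being created even after we waited "
        "%d seconds or %d attempts." %
        (volume_id, int(max_retries * interval), max_retries))


# Example: a fake status function that reports 'available' on the 3rd poll.
calls = []
def fake_status(vol_id):
    calls.append(vol_id)
    return 'available' if len(calls) >= 3 else 'creating'

attempts_used = await_volume_created(fake_status, 'vol-1',
                                     max_retries=5, interval=0)
# attempts_used == 3
```

With a fixed `max_retries`, any volume that legitimately needs more time (e.g. a large volume on a slow backend) pushes the instance into an error state, which is why the limit should come from configuration.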
Here is a traceback:
2014-06-19 06:54:24.303 17578 ERROR nova.compute. d0b8f2c3cf70445 74f612ea-
2014-06-19 06:54:24.303 17578 TRACE nova.compute. Traceback (most recent call last):
2014-06-19 06:54:24.303 17578 TRACE nova.compute.   File "/usr/lib/ in _prep_block_device
2014-06-19 06:54:24.303 17578 TRACE nova.compute.     self._await_
2014-06-19 06:54:24.303 17578 TRACE nova.compute.   File "/usr/lib/ in attach_
2014-06-19 06:54:24.303 17578 TRACE nova.compute.     block_device_
2014-06-19 06:54:24.303 17578 TRACE nova.compute.   File "/usr/lib/ in attach
2014-06-19 06:54:24.303 17578 TRACE nova.compute.     wait_func(context, vol['id'])
2014-06-19 06:54:24.303 17578 TRACE nova.compute.   File "/usr/lib/ in _await_
2014-06-19 06:54:24.303 17578 TRACE nova.compute.     attempts=attempts)
2014-06-19 06:54:24.303 17578 TRACE nova.compute. VolumeNotCreated: Volume 8489549e- being created even after we waited 65 seconds or 60 attempts.
2014-06-19 06:54:24.303 17578 TRACE nova.compute.
tags: added: volumes
Changed in nova:
assignee: Aaron Rosen (arosen) → akash (akashg1611)
Changed in nova:
status: Confirmed → In Progress
Changed in nova:
milestone: none → juno-2
status: Fix Committed → Fix Released
Changed in nova:
milestone: juno-2 → 2014.2
tags: added: icehouse-backport-potential
Thanks for the bug report. I think we should make this timeout configurable so it can be set to a value higher than 60 seconds when volume creation sometimes takes longer than that.
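As a dependency-free sketch of what "making it configurable" means: nova itself registers options through oslo.config, but the idea can be shown with argparse (the option name below is illustrative, not necessarily the one the eventual fix used).

```python
# Sketch: replace a hard-coded retry count with a deployer-settable option.
# The option name 'block_device_retries' is hypothetical.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--block_device_retries', type=int, default=60,
                    help='Max polls to wait for a block device to be created')

# Deployer with a slow storage backend raises the limit:
conf = parser.parse_args(['--block_device_retries', '180'])
# conf.block_device_retries == 180

# Deployments that say nothing keep the old default:
conf_default = parser.parse_args([])
# conf_default.block_device_retries == 60
```

The key point is that the default preserves today's behavior, while operators whose backends need more time can raise the limit without patching the code.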