test_encrypted_cinder_volumes_cryptsetup times out waiting for volume to be available

Bug #1348204 reported by Matt Riedemann
This bug affects 5 people
Affects                     Status      Importance  Assigned to  Milestone
OpenStack Compute (nova)    Confirmed   High        Unassigned
  Icehouse                  New         Critical    Unassigned

Bug Description

http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-full/168a5dd/console.html#_2014-07-24_01_07_09_115

2014-07-24 01:07:09.116 | tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
2014-07-24 01:07:09.116 | ----------------------------------------------------------------------------------------------------------------------------------------
2014-07-24 01:07:09.116 |
2014-07-24 01:07:09.116 | Captured traceback:
2014-07-24 01:07:09.117 | ~~~~~~~~~~~~~~~~~~~
2014-07-24 01:07:09.117 | Traceback (most recent call last):
2014-07-24 01:07:09.117 | File "tempest/test.py", line 128, in wrapper
2014-07-24 01:07:09.117 | return f(self, *func_args, **func_kwargs)
2014-07-24 01:07:09.117 | File "tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_cryptsetup
2014-07-24 01:07:09.117 | self.attach_detach_volume()
2014-07-24 01:07:09.117 | File "tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in attach_detach_volume
2014-07-24 01:07:09.117 | self.nova_volume_detach()
2014-07-24 01:07:09.117 | File "tempest/scenario/manager.py", line 757, in nova_volume_detach
2014-07-24 01:07:09.117 | self._wait_for_volume_status('available')
2014-07-24 01:07:09.117 | File "tempest/scenario/manager.py", line 710, in _wait_for_volume_status
2014-07-24 01:07:09.117 | self.volume_client.volumes, self.volume.id, status)
2014-07-24 01:07:09.118 | File "tempest/scenario/manager.py", line 230, in status_timeout
2014-07-24 01:07:09.118 | not_found_exception=not_found_exception)
2014-07-24 01:07:09.118 | File "tempest/scenario/manager.py", line 296, in _status_timeout
2014-07-24 01:07:09.118 | raise exceptions.TimeoutException(message)
2014-07-24 01:07:09.118 | TimeoutException: Request timed out
2014-07-24 01:07:09.118 | Details: Timed out waiting for thing 4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

I've actually been seeing this out of tree in our internal CI as well, but I thought it was just us or our slow VMs; this is the first time I've seen it upstream.

From the traceback in the console log, it looks like the volume does eventually get to available status, because it doesn't move out of that state while tempest is trying to delete the volume on tear down.
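
For context on where the TimeoutException comes from: tempest's status_timeout helper is essentially a polling loop on the volume status. A minimal sketch of that kind of loop (a simplified stand-in, not the actual tempest code; the volumes_client call and the timeout/interval defaults are assumptions):

import time


class TimeoutException(Exception):
    pass


def wait_for_volume_status(volumes_client, volume_id, expected_status,
                           build_timeout=300, build_interval=1):
    """Poll the volume until it reaches expected_status or we give up.

    Simplified stand-in for tempest's status polling: if the detach never
    completes on the nova side, the volume stays in 'in-use'/'detaching'
    and this loop raises the TimeoutException seen in the console log.
    """
    start = time.time()
    while time.time() - start < build_timeout:
        # Hypothetical client call; tempest goes through its volume client.
        volume = volumes_client.get_volume(volume_id)
        if volume['status'] == expected_status:
            return volume
        if volume['status'] == 'error':
            raise RuntimeError('volume %s went to error state' % volume_id)
        time.sleep(build_interval)
    raise TimeoutException('Timed out waiting for thing %s to become %s'
                           % (volume_id, expected_status))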

Revision history for this message
Matt Riedemann (mriedem) wrote :

Nothing is really jumping out at me from the cinder volume log. There aren't any errors in the c-vol log for 4ef6a14a-3fce-417f-aa13-5aab1789436e; there are warnings about quotas, but those are probably unrelated since they are just usage deprecation warnings.

The cinder-api log has a lot of traces of this though:

2014-07-24 00:37:57.499 21230 ERROR cinder.volume.volume_types [req-fb6bda10-61d3-4cee-9855-0b602eb23eb7 7a9c74f73e4645d48d83d22735da6498 a7b5b2c20c324abe8d81ed6a87732731 - - -] Default volume type is not found, please check default_volume_type config: Volume type with name lvm could not be found.
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types Traceback (most recent call last):
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/volume/volume_types.py", line 128, in get_default_volume_type
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types vol_type = get_volume_type_by_name(ctxt, name)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/volume/volume_types.py", line 117, in get_volume_type_by_name
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types return db.volume_type_get_by_name(context, name)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/db/api.py", line 379, in volume_type_get_by_name
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types return IMPL.volume_type_get_by_name(context, name)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/db/sqlalchemy/api.py", line 161, in wrapper
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types return f(*args, **kwargs)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/db/sqlalchemy/api.py", line 1836, in volume_type_get_by_name
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types return _volume_type_get_by_name(context, name)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/db/sqlalchemy/api.py", line 161, in wrapper
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types return f(*args, **kwargs)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types File "/opt/stack/new/cinder/cinder/db/sqlalchemy/api.py", line 1827, in _volume_type_get_by_name
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types raise exception.VolumeTypeNotFoundByName(volume_type_name=name)
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types VolumeTypeNotFoundByName: Volume type with name lvm could not be found.
2014-07-24 00:37:57.499 21230 TRACE cinder.volume.volume_types

I'm not sure if that's related, or somehow causing things to slow down, but it looks bad for sure.
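
The traceback is logged from inside cinder.volume.volume_types itself, which suggests the lookup catches the VolumeTypeNotFoundByName error, logs it, and falls back to no default type, roughly like the sketch below (a paraphrase of what the trace shows, not the actual cinder code). If that's right, it's noisy rather than fatal to the request, which fits the "probably unrelated" read.

import logging

LOG = logging.getLogger(__name__)


class VolumeTypeNotFoundByName(Exception):
    pass


def get_default_volume_type(default_volume_type_name, lookup_by_name):
    """Sketch of the lookup pattern seen in the cinder-api trace.

    lookup_by_name stands in for the volume_types DB query; the real code
    reads the name from the default_volume_type config option. The point
    is that the failure is logged and an empty result is returned, so the
    API request itself keeps going.
    """
    vol_type = {}
    if default_volume_type_name:
        try:
            vol_type = lookup_by_name(default_volume_type_name)
        except VolumeTypeNotFoundByName:
            LOG.exception('Default volume type is not found, please check '
                          'default_volume_type config')
    return vol_type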

Revision history for this message
Matt Riedemann (mriedem) wrote :

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmlsZSBcXFwidGVtcGVzdC9zY2VuYXJpby90ZXN0X2VuY3J5cHRlZF9jaW5kZXJfdm9sdW1lcy5weVxcXCJcIiBBTkQgbWVzc2FnZTpcImluIHRlc3RfZW5jcnlwdGVkX2NpbmRlcl92b2x1bWVzX2NyeXB0c2V0dXBcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNjIxNDQ1NzQ2NH0=

132 hits in 7 days, all failures, multiple queues.

The fingerprint isn't great, but we don't have a trace in the tempest.txt log and we don't have any good errors in the cinder logs that appear related to this, so it just seems like a routine timeout that we'll have to manage somehow.

Revision history for this message
Matt Riedemann (mriedem) wrote :

According to logstash this started showing up on 7/15.

Revision history for this message
Matt Riedemann (mriedem) wrote :

The timing of this change merging lines up with 7/15, but it doesn't touch the LVM driver, so it's probably unrelated:

https://review.openstack.org/#/c/99782/

Revision history for this message
Matt Riedemann (mriedem) wrote :

Current theory after talking with Jay Bryant is that something changed in nova on 7/15 which broke this: the attach could be failing silently on the nova side, and we are just timing out waiting for a state change that is never going to happen.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Bingo:

http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-full/168a5dd/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-07-24_00_57_06_425

2014-07-24 00:57:06.425 ERROR oslo.messaging.rpc.dispatcher [req-cd0800cd-de93-40f4-86ca-9f2ac3a1fe88 TestEncryptedCinderVolumes-1712166565 TestEncryptedCinderVolumes-692029700] Exception during message handling: <type 'NoneType'> can't be decoded
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 406, in decorated_function
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 291, in decorated_function
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher pass
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 277, in decorated_function
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-07-24 00:57:06.425 21003 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new...


Revision history for this message
Matt Riedemann (mriedem) wrote :

I would like to blame this, but I'm not sure yet: https://review.openstack.org/#/c/91722/.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is the query I'm using now to hit both the attach error and detach error:

message:"TestEncryptedCinderVolumes" AND ((message:"detach_volume" AND message:"TypeError: <type 'NoneType'> can't be decoded") OR (message:"attach_volume" AND message:"DeviceIsBusy: The supplied device \(vdb\) is busy")) AND tags:"screen-n-cpu.txt"

Revision history for this message
Matt Riedemann (mriedem) wrote :

There were 3 changes merged to nova's libvirt driver on 7/18 related to volumes which might be interesting:

https://github.com/openstack/nova/commit/54458334136b284bb0c45373e7cacf5c1fa0ab99
https://github.com/openstack/nova/commit/d19c75c19d2de8b20e82e6de9413ba53671ad7fb
https://github.com/openstack/nova/commit/4fc6f87af0399e9f2b8042629eecbd1f804ff7d7

The first is a logging change, so that shouldn't be an issue.

4fc6f87af0399e9f2b8042629eecbd1f804ff7d7 doesn't seem related since that should only be hit when updating host stats.

Given that d19c75c19d2de8b20e82e6de9413ba53671ad7fb changes the attach/detach volume flows with respect to the bdm connection info, and that's where we're seeing the encrypt/decrypt failures, that's the one I'd focus on as having introduced the bug.

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

This fails in icehouse too, so you can narrow the list of suspects a bit: https://review.openstack.org/#/c/102067/

Tracy Jones (tjones-i)
tags: added: volumes
tags: added: testing
Revision history for this message
Joe Gordon (jogo) wrote :

No hits anymore; it looks like this was resolved somehow.

Changed in nova:
status: New → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → juno-3
status: Fix Committed → Fix Released
Changed in cinder:
status: New → Fix Committed
Matt Riedemann (mriedem)
Changed in nova:
milestone: juno-3 → none
no longer affects: cinder
Changed in nova:
status: Fix Released → New
status: New → Confirmed
Revision history for this message
Matt Riedemann (mriedem) wrote :

Bug 1374458 is a race failure when detaching an encrypted volume.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/124448

Matt Riedemann (mriedem)
tags: added: icehouse-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/124448
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=6c7d4c91911ccb2c79d293302cd944a24079af2f
Submitter: Jenkins
Branch: master

commit 6c7d4c91911ccb2c79d293302cd944a24079af2f
Author: Matt Riedemann <email address hidden>
Date: Fri Sep 26 08:46:17 2014 -0700

    Log original error when attaching volume fails

    We have a race in the gate to attach an encrypted volume and the only
    thing we see in the logs is the DeviceIsBusy error which makes this hard
    to debug.

    This change logs the original error so we can dig deeper into the root
    cause.

    Related-Bug: #1348204

    Change-Id: I7d0f2571e80a4e55133f823d2a04feaf4dddf2e5
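
The shape of that fix is roughly the following (a simplified sketch of the pattern the commit message describes, with hypothetical names and structure, not the actual nova diff): log the underlying libvirt exception before raising the translated DeviceIsBusy error.

import logging

LOG = logging.getLogger(__name__)


class DeviceIsBusy(Exception):
    def __init__(self, device):
        super(DeviceIsBusy, self).__init__(
            'The supplied device (%s) is busy' % device)


def attach_volume_with_logging(attach_fn, instance_uuid, device_name):
    """Sketch of 'log the original error, then raise the translated one'.

    attach_fn stands in for the libvirt attach call. Before this change
    the caller only ever saw DeviceIsBusy; logging the original exception
    first is what makes the real root cause visible in the n-cpu logs.
    """
    try:
        attach_fn()
    except Exception:
        LOG.exception('Failed to attach volume at mountpoint: %s on '
                      'instance %s', device_name, instance_uuid)
        raise DeviceIsBusy(device_name)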

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is new to me, and I'm not sure if the logging change is causing it:

http://logs.openstack.org/31/124231/3/gate/gate-tempest-dsvm-full/b860d61/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-09-27_14_47_26_507

2014-09-27 14:47:26.507 ERROR oslo.messaging.rpc.dispatcher [req-e79201b2-ad39-4f1e-9d45-7c6daf90af52 TestEncryptedCinderVolumes-703905565 TestEncryptedCinderVolumes-2029807785] Exception during message handling: <type 'NoneType'> can't be decoded
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 414, in decorated_function
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher payload)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 298, in decorated_function
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher pass
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 284, in decorated_function
2014-09-27 14:47:26.507 25061 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-09-27 14:47:26.507 ...


Revision history for this message
Matt Riedemann (mriedem) wrote :

I see the audit log here:

http://logs.openstack.org/31/124231/3/gate/gate-tempest-dsvm-full/b860d61/logs/screen-n-cpu.txt.gz#_2014-09-27_14_47_25_962

2014-09-27 14:47:25.962 AUDIT nova.compute.manager [req-e79201b2-ad39-4f1e-9d45-7c6daf90af52 TestEncryptedCinderVolumes-703905565 TestEncryptedCinderVolumes-2029807785] [instance: f7728e87-46dd-4c39-b25e-a2f1e1e6a8d1] Detach volume 0754b797-6d3a-4574-bcf6-fb0efb55ef92 from mountpoint /dev/vdb

bdm.connection_info is nullable, but I'm not sure why it'd be None in this case.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This looks like what we'd want if bdm.connection_info is None, but this is down in the virt driver code:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/block_device.py?id=2014.2.b3#n271

There is a lot of magic happening here, and it's not clear to me what this is used for or why it isn't in the compute manager. I'm wondering whether the block_device wrapper code in the virt driver is setting bdm.connection_info to None and updating that in the database in one part of the code, so that when nova.compute.manager.detach_volume is called later it's None and we fail.

I'll push up some diagnostic logging patches to try and see what's going on.
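
To illustrate that hypothesis (a toy sequence, not nova code): if one code path persists connection_info as None and the detach path later decodes whatever is stored, the decode step is where it falls over, which matches the shape of the "can't be decoded" traces above.

import json

# Toy stand-in for the bdm row; connection_info is stored as a JSON string.
bdm = {'volume_id': '0754b797-6d3a-4574-bcf6-fb0efb55ef92',
       'connection_info': json.dumps({'driver_volume_type': 'iscsi'})}


def wrapper_save(bdm, connection_info):
    """Hypothesized step: a wrapper persists the value, which in the
    failure case is None."""
    bdm['connection_info'] = (json.dumps(connection_info)
                              if connection_info is not None else None)


def detach_volume(bdm):
    """Later detach path: decoding a None connection_info raises, which is
    the shape of the error in the n-cpu trace."""
    return json.loads(bdm['connection_info'])


wrapper_save(bdm, None)   # the suspected race: None gets written back
try:
    detach_volume(bdm)
except TypeError as exc:  # json.loads(None) raises TypeError
    print('detach failed: %r' % exc)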

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/125129

Sean Dague (sdague)
Changed in nova:
importance: Undecided → Critical
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.openstack.org/125129
Reason: eff it, we should probably just add a check in the code when we hit the bug to reload the connection_info (or ignore the error).
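
For reference, the mitigation suggested in that abandon message would look roughly like this sketch (hypothetical helper names; refresh_connection_info stands in for however the bdm would be reloaded, which is an assumption, not an existing nova API):

import json
import logging

LOG = logging.getLogger(__name__)


def get_connection_info_for_detach(bdm, refresh_connection_info):
    """Sketch of 'reload connection_info (or ignore the error) on detach'.

    If the bdm's connection_info is missing, try to refresh it instead of
    handing None to the JSON decoder; if that also fails, log and skip the
    driver detach rather than blowing up the whole RPC call.
    """
    raw = bdm.get('connection_info')
    if raw is None:
        LOG.warning('connection_info missing for volume %s; attempting to '
                    'reload it before detaching', bdm.get('volume_id'))
        raw = refresh_connection_info(bdm)  # hypothetical reload hook
    if raw is None:
        LOG.warning('Still no connection_info for volume %s; skipping the '
                    'driver detach', bdm.get('volume_id'))
        return None
    return json.loads(raw)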

Revision history for this message
Matt Riedemann (mriedem) wrote :

Based on the new logging for the original libvirt error, I haven't seen these before:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGF0dGFjaCB2b2x1bWUgYXQgbW91bnRwb2ludFwiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMTEtMDNUMjA6NTE6MjIrMDA6MDAiLCJ0byI6IjIwMTQtMTEtMTRUMjA6NTE6MjIrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQxNTk5ODQ2NTA4MH0=

1. In this case, it looks like the connection to libvirt just drops:

2014-11-12 03:08:52.797 5908 WARNING nova.virt.libvirt.driver [-] Connection to libvirt lost: 0
2014-11-12 03:08:52.800 ERROR nova.virt.libvirt.driver [req-f3264cfc-36d6-41af-a3aa-2d913e16a343 TestEncryptedCinderVolumes-1650787953 TestEncryptedCinderVolumes-658618010] [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] Failed to attach volume at mountpoint: /dev/vdb
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] Traceback (most recent call last):
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1420, in attach_volume
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] rv = execute(f, *args, **kwargs)
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] six.reraise(c, e, tb)
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] rv = meth(*args, **kwargs)
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] File "/usr/lib64/python2.7/site-packages/libvirt.py", line 439, in attachDeviceFlags
2014-11-12 03:08:52.800 5908 TRACE nova.virt.libvirt.driver [instance: 14f6ba42-0e39-494d-a8d1-ca04f2843593] if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2014-11-12 03:08:52.800 5908 T...


Revision history for this message
Joe Gordon (jogo) wrote :

19 hits in 10 days: http://status.openstack.org/elastic-recheck/#1348204

dropping this down from critical

Changed in nova:
importance: Critical → High