test_resize_server_confirm server failed to build

Bug #1213212 reported by Matthew Treinish
This bug affects 4 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Medium
Assigned to: Unassigned

Bug Description

When running tempest in parallel, test_resize_server_confirm occasionally fails: the server does not build and goes into an ERROR state. See:

2013-08-16 14:08:33.607 | ======================================================================
2013-08-16 14:08:33.607 | FAIL: tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm[gate,smoke]
2013-08-16 14:08:33.607 | tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm[gate,smoke]
2013-08-16 14:08:33.608 | ----------------------------------------------------------------------
2013-08-16 14:08:33.608 | _StringException: Empty attachments:
2013-08-16 14:08:33.608 | stderr
2013-08-16 14:08:33.609 | stdout
2013-08-16 14:08:33.609 |
2013-08-16 14:08:33.609 | Traceback (most recent call last):
2013-08-16 14:08:33.609 | File "tempest/api/compute/servers/test_server_actions.py", line 161, in test_resize_server_confirm
2013-08-16 14:08:33.609 | self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')
2013-08-16 14:08:33.609 | File "tempest/services/compute/json/servers_client.py", line 165, in wait_for_server_status
2013-08-16 14:08:33.609 | raise exceptions.BuildErrorException(server_id=server_id)
2013-08-16 14:08:33.610 | BuildErrorException: Server ed3c7212-f4b6-4365-91b8-bddddc9e1a60 failed to build and is in ERROR status
2013-08-16 14:08:33.610 |
2013-08-16 14:08:33.610 |
2013-08-16 14:08:33.611 | ======================================================================

A set of logs for this failure can be found here:
http://logs.openstack.org/63/42063/1/gate/gate-tempest-devstack-vm-testr-full/fa32f42/
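The failure mode in the traceback is tempest's wait loop giving up early: while polling for VERIFY_RESIZE, the client sees the server in ERROR and raises BuildErrorException instead of timing out. A minimal sketch of that wait pattern (names and signatures are illustrative, not tempest's exact API):

```python
import time


class BuildErrorException(Exception):
    """Raised when the server lands in ERROR while we wait for another status."""


def wait_for_server_status(client, server_id, status, timeout=300, interval=3):
    # Poll until the server reaches the requested status; fail fast on ERROR,
    # which is what produces the BuildErrorException in the traceback above.
    deadline = time.time() + timeout
    while time.time() < deadline:
        current = client.get_server_status(server_id)
        if current == status:
            return current
        if current == 'ERROR':
            raise BuildErrorException(
                'Server %s failed to build and is in ERROR status' % server_id)
        time.sleep(interval)
    raise TimeoutError('Server %s never reached %s' % (server_id, status))


class FakeClient:
    # Stand-in for the compute servers client: replays a scripted status
    # sequence, one status per poll.
    def __init__(self, statuses):
        self._statuses = iter(statuses)

    def get_server_status(self, server_id):
        return next(self._statuses)
```

With a client that reports BUILD and then ERROR, the helper raises immediately rather than waiting out the full timeout, which is why these failures show up quickly in the gate logs.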

Tags: testing
Revision history for this message
Bob Ball (bob-ball) wrote:

We are seeing this very reproducibly at the moment in the Citrix CI (nearly 100% of runs).
We have not been able to track down the cause yet, but I don't believe it's due to running in parallel, as I don't think our tests run in parallel.

Sean Dague (sdague) wrote:

Removing this as a tempest issue, as I don't think it actually is a bug in tempest; it's a nova state bug.

Changed in nova:
status: New → Confirmed
importance: Undecided → Medium
no longer affects: tempest
Matthew Treinish (treinish) wrote:

Based on the age of the bug and the lack of logs, I don't think we'll be able to make progress on this. We can open a new bug with more detail if we come across this again.

Changed in nova:
status: Confirmed → Invalid
Ihar Hrachyshka (ihar-hrachyshka) wrote:
Changed in nova:
status: Invalid → New
Joe Gordon (jogo) wrote:

stacktrace: http://logs.openstack.org/07/107107/1/check/check-grenade-dsvm/c88851d/logs/new/screen-n-cpu.txt.gz?level=TRACE#_2014-07-31_21_42_32_647

2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] vif))
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 233, in _define_filter
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] self._conn.nwfilterDefineXML(xml)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 146, in proxy_call
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] rv = execute(f, *args, **kwargs)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 127, in execute
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] six.reraise(c, e, tb)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 85, in tworker
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] rv = meth(*args, **kwargs)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2651, in nwfilterDefineXML
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] if ret is None:raise libvirtError('virNWFilterDefineXML() failed', conn=self)
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16] libvirtError: An error occurred, but the cause is unknown
2014-07-31 21:42:32.647 17870 TRACE nova.compute.manager [instance: 9b534244-4fcd-425c-9d9b-3f201b88cf16]
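The eventlet tpool frames in this traceback appear because nova runs under green threads, and a blocking C-level call such as libvirt's nwfilterDefineXML() is dispatched to a real OS thread so it doesn't stall the cooperative scheduler; any exception raised in the worker is then re-raised in the caller. A minimal stdlib sketch of that dispatch-and-reraise pattern, using concurrent.futures instead of eventlet (names here are illustrative, not eventlet's internals):

```python
from concurrent.futures import ThreadPoolExecutor

# A real OS thread pool; eventlet's tpool keeps something similar around so
# blocking library calls don't block the green-thread event loop.
_pool = ThreadPoolExecutor(max_workers=1)


def proxy_call(func, *args, **kwargs):
    # Submit the blocking call to a worker thread and wait for the result.
    # future.result() re-raises any worker exception in the caller, which is
    # why the libvirtError above surfaces through the tpool frames.
    future = _pool.submit(func, *args, **kwargs)
    return future.result()
```

This is why the stack shows tpool.py between nova's firewall driver and libvirt.py: the libvirtError is raised in the worker thread and propagated back to the calling green thread.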

Joe Gordon (jogo) wrote:

I don't see any hits in logstash for message:"An error occurred, but the cause is unknown", so it looks like this may have been resolved. Marking as Incomplete.

Changed in nova:
status: New → Incomplete
Sean Dague (sdague)
Changed in nova:
status: Incomplete → Invalid