test_volume_boot_pattern failed to get an instance into ACTIVE state
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Expired | Undecided | Unassigned | |
Bug Description
2017-02-11 01:24:50.334 | Captured traceback:
2017-02-11 01:24:50.334 | ~~~~~~~~~~~~~~~~~~~
2017-02-11 01:24:50.334 | Traceback (most recent call last):
2017-02-11 01:24:50.334 | File "tempest/test.py", line 99, in wrapper
2017-02-11 01:24:50.334 | return f(self, *func_args, **func_kwargs)
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | security_
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | return self.create_
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | image_id=image_id, **kwargs)
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | server['id'])
2017-02-11 01:24:50.334 | File "/opt/stack/
2017-02-11 01:24:50.334 | self.force_
2017-02-11 01:24:50.334 | File "/opt/stack/
2017-02-11 01:24:50.334 | six.reraise(
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | clients.
2017-02-11 01:24:50.334 | File "tempest/
2017-02-11 01:24:50.334 | raise lib_exc.
2017-02-11 01:24:50.334 | tempest.
2017-02-11 01:24:50.334 | Details: (TestVolumeBoot
The instance was created at:
2017-02-11 01:24:50.304 | 2017-02-11 01:17:58,462 26423 INFO [tempest.
Then the test polls the instance, waiting for it to transition to ACTIVE:
2017-02-11 01:24:50.305 | 2017-02-11 01:17:58,802 26423 INFO [tempest.
2017-02-11 01:24:50.305 | 2017-02-11 01:17:58,802 26423 DEBUG [tempest.
2017-02-11 01:24:50.305 | Body: None
2017-02-11 01:24:50.305 | Response - Headers: {u'connection': 'close', u'vary': 'X-OpenStack-
2017-02-11 01:24:50.305 | Body: {"server": {"OS-EXT-
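The wait loop behind this failure is roughly the following sketch. This is a simplified illustration, not tempest's actual code: `get_server` is a hypothetical callable returning the server body as a dict, and the real logic lives in `tempest.common.waiters.wait_for_server_status`, backed by a `ServersClient.show_server` call.

```python
import time

def wait_for_server_status(get_server, server_id, status='ACTIVE',
                           timeout=196, interval=1):
    """Poll until the server reaches the requested status.

    `get_server` is a hypothetical callable returning the server body as
    a dict.  `timeout` mirrors tempest's build_timeout setting (the
    default value here is illustrative, not taken from this job's config).
    """
    start = time.time()
    while True:
        server = get_server(server_id)
        if server['status'] == status:
            return server
        if server['status'] == 'ERROR':
            raise RuntimeError('server %s went to ERROR state' % server_id)
        if time.time() - start >= timeout:
            # This is the branch hit in this bug: the server stays in
            # BUILD (task state: spawning) until the timeout expires.
            raise TimeoutError(
                'Server %s failed to reach %s status within %s s; '
                'current status: %s' % (server_id, status, timeout,
                                        server['status']))
        time.sleep(interval)
```

In the failed run the server never left BUILD, so the loop above times out and tempest then attempts the cleanup shown next.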
The attempt to clean up the instance is at:
2017-02-11 01:24:50.316 | 2017-02-11 01:21:15,384 26423 INFO [tempest.
If we look into nova-api logs for the initial VM creation request, it's here:
http://
It seems like all went fine: the db record is created and all calls to neutron and cinder succeeded, so we would expect e.g. the nova-compute service to process the new instance. But when we search for the instance UUID in the nova-cpu log, we can't find any occurrence. The conductor and scheduler logs don't reveal anything wrong either, though the debug output is hard to read, so maybe I missed something.
It all looks like, for some reason, the nova instance request was never dispatched to nova-compute, so the instance never progressed past the building stage.
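The log search described above can be scripted. This is a minimal sketch under stated assumptions: the service names, log paths, and UUID in the usage comment are hypothetical placeholders, not values from this failure.

```python
import os

def count_uuid_mentions(instance_uuid, logs):
    """Count lines mentioning the instance UUID in each service log.

    `logs` maps a service name to its log file path.  A zero count for
    nova-compute alongside nonzero counts for the other services matches
    the symptom in this bug: the boot request never reached the compute
    service.
    """
    counts = {}
    for service, path in logs.items():
        if not os.path.exists(path):
            counts[service] = None  # log file absent on this node
            continue
        with open(path, errors='replace') as f:
            counts[service] = sum(1 for line in f if instance_uuid in line)
    return counts

# Hypothetical devstack-style usage (adjust paths to the deployment):
# count_uuid_mentions('00000000-0000-0000-0000-000000000000',
#                     {'nova-api': '/opt/stack/logs/n-api.log',
#                      'nova-compute': '/opt/stack/logs/n-cpu.log'})
```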
Note the failure is in the old part of grenade, meaning it runs with Ocata code.
tags: added: gate-failure
Changed in neutron:
status: New → Confirmed
Changed in nova:
status: New → Confirmed
Changed in neutron:
status: Confirmed → Triaged
status: Triaged → Incomplete
no longer affects: neutron
Changed in nova:
status: Confirmed → Incomplete
e-r-q for master grenade: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22TestVolumeBootPatternV2%3Atest_volume_boot_pattern)%20Server%20%5C%22%20AND%20message%3A%5C%22%20failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20message%3A%5C%22Current%20status%3A%20BUILD.%20Current%20task%20state%3A%20spawning.%5C%22%20AND%20tags%3Agrenade.sh.txt%20AND%20build_branch%3Amaster%20AND%20build_status%3AFAILURE
e-r-q for stable ocata tempest: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22TestVolumeBootPatternV2%3Atest_volume_boot_pattern)%20Server%20%5C%22%20AND%20message%3A%5C%22%20failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20message%3A%5C%22Current%20status%3A%20BUILD.%20Current%20task%20state%3A%20spawning.%5C%22%20AND%20build_branch%3A%5C%22stable%2Focata%5C%22%20AND%20build_status%3AFAILURE
Since the smoke test runs on Ocata code, I expected to see failures on the stable/ocata branch as well, but maybe we don't have enough runs there to reproduce.