Tempest fails due to neutron cache miss
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | High | Aaron Rosen |
Havana | Fix Released | High | Aaron Rosen |
Bug Description
This looks like a regression caused by the following commits:
1. https:/
2. https:/
The above patches cause two issues:
1. tempest.
2. Unable to delete those servers using nova delete.
Tempest and compute traceback here:
http://
The patch was to disable cache refresh in allocate_
The servers created by the above test are left in the ERROR state, and we are unable to delete them. This is likely because the cache is disabled while deallocating the instance and/or port.
NOTE:
If we restore the @refresh_sync decorator on those methods and do not use the decorator in _get_instance_
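To make the NOTE concrete, below is a minimal, hypothetical sketch of what a cache refresh/sync decorator like @refresh_sync might do: re-sync the network info cache after the wrapped method runs, even when it fails, so a later delete does not act on a stale or missing cache entry. The names (refresh_sync, refresh_cache, FakeAPI, deallocate_for_instance) are illustrative and are not nova's actual implementation.

```python
import functools

def refresh_sync(func):
    """Illustrative cache-refresh decorator (not nova's actual code)."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        finally:
            # Re-sync the cache even if the wrapped call raises, so a later
            # delete does not operate on stale or missing cache entries.
            self.refresh_cache()
    return wrapper

class FakeAPI:
    def __init__(self):
        self.refreshed = 0

    def refresh_cache(self):
        self.refreshed += 1

    @refresh_sync
    def deallocate_for_instance(self):
        raise RuntimeError("simulated deallocate failure")

api = FakeAPI()
try:
    api.deallocate_for_instance()
except RuntimeError:
    pass
# Even though deallocation failed, the cache refresh still ran once.
```

Without such a refresh on the deallocate path, a failed deallocation can leave the cache inconsistent, which matches the "stuck in ERROR, cannot delete" symptom described above.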
description: updated
Changed in nova:
  assignee: nobody → Aaron Rosen (arosen)
Changed in nova:
  status: New → In Progress
  importance: Undecided → High
Changed in nova:
  milestone: none → icehouse-1
Changed in nova:
  status: Fix Committed → Fix Released
Changed in nova:
  milestone: icehouse-1 → 2014.1
From the nova-compute exception, the error seemed to occur in python-neutronclient:
  File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 136, in _get_available_networks
    nets = neutron.list_networks(**search_opts).get('networks', [])
  File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 108, in with_params
    ret = self.function(instance, *args, **kwargs)
  File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 325, in list_networks
    **_params)
  File "/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1198, in list
    res.extend(r[collection])
KeyError: 'networks'
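A minimal, hypothetical illustration of the KeyError in the traceback: the client's list helper indexes each response body by the collection name (`r[collection]`), so a response without a 'networks' key, such as an error payload, raises KeyError inside neutronclient before nova's `.get('networks', [])` guard is ever reached. The `collect` function and the sample payloads here are assumptions for illustration, not neutronclient's actual code.

```python
def collect(responses, collection):
    """Sketch of the client-side pattern res.extend(r[collection])."""
    res = []
    for r in responses:
        res.extend(r[collection])  # raises KeyError if the key is absent
    return res

# Well-formed response body: the collection key is present.
ok = collect([{"networks": [{"id": "net-1"}]}], "networks")

# Malformed or error response body: the collection key is missing,
# so the lookup raises KeyError('networks') as in the traceback.
try:
    collect([{"NeutronError": {"message": "boom"}}], "networks")
    missing = None
except KeyError as e:
    missing = str(e)
```

This suggests the failure is the server returning a body without the expected collection, which is why the neutron-server logs requested below would help pin down the cause.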
Is there any chance we could also get the neutron-server logs? I'm still searching for what caused this though.