create_and_list_trunk_subports rally scenario failed with timeouts
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
neutron | Fix Released | High | Armando Migliaccio |
Bug Description
This happened once in Pike.
Traceback (most recent call last):
File "/opt/stack/
getattr(
File "/home/
trunk = self._create_
File "/opt/stack/
f = func(self, *args, **kwargs)
File "/home/
return self.clients(
File "/usr/local/
return self.post(
File "/usr/local/
headers=
File "/usr/local/
resp, replybody = self.httpclient
File "/usr/local/
return self.request(url, method, **kwargs)
File "/usr/local/
resp = super(SessionCl
File "/usr/local/
return self.session.
File "/usr/local/
return wrapped(*args, **kwargs)
File "/usr/local/
resp = send(**kwargs)
File "/usr/local/
raise exceptions.
ConnectTimeout: Request to http://
All four attempts failed the same way. Other scenarios worked just fine.
When you look at the q-svc log, you see that all four POST requests for trunks took a very long time; for example, req-b98d273b-
The only major code path between the quota check and this notification is validation of the request. The requests are, admittedly, heavily loaded, with 125 subports per trunk, so that may have something to do with it.
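A minimal sketch of why per-subport validation could dominate here: if each subport triggers its own network lookup, a 125-subport trunk issues 125 DB round trips per request. All names below are illustrative stand-ins, not neutron's actual API; this is an assumption about the access pattern, not the real code.

```python
# Hypothetical sketch of the suspected per-subport validation pattern:
# one "database" round trip per subport. Names are illustrative only.

def fetch_network(db, network_id, counter):
    """Simulate a single-row DB fetch; counter tracks round trips."""
    counter[0] += 1
    return db[network_id]

def validate_subports_one_by_one(db, subports, trunk_mtu):
    """Validate each subport's network MTU with a separate lookup."""
    counter = [0]
    for sp in subports:
        net = fetch_network(db, sp["network_id"], counter)
        if net["mtu"] > trunk_mtu:
            raise ValueError("subport network MTU exceeds trunk MTU")
    return counter[0]  # number of DB round trips performed

db = {n: {"mtu": 1450} for n in range(125)}
subports = [{"network_id": n} for n in range(125)]
print(validate_subports_one_by_one(db, subports, trunk_mtu=1500))  # → 125
```

Even if each simulated round trip is cheap, 125 real queries under a loaded gate node can plausibly push the request past the client-side timeout.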
tags: added: gate-failure
Changed in neutron:
importance: Undecided → High
status: New → Confirmed
As Armando pointed out, even in 'good' runs these scenarios take a lot of time, so we probably just hit the edge of the timeout allowed by neutronclient. We may be able to optimize DB operations when validating trunk requests; specifically, we could fetch all networks at once to validate MTU compatibility, offloading the bulking to the native DB layer.
We have similar code that bulk-fetches networks here: https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L191, which we may be able to partially reuse.
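The proposed optimization can be sketched as follows: collect the subports' network IDs, fetch the networks in a single query, and validate MTU compatibility in memory. Again, the names and DB shape are hypothetical placeholders, not the rules.py implementation.

```python
# Hypothetical sketch of the bulk-fetch approach: one query for all
# networks, regardless of subport count. Names are illustrative only.

def fetch_networks_bulk(db, network_ids, counter):
    """Simulate a single batched DB query returning networks by ID."""
    counter[0] += 1  # one round trip no matter how many IDs
    return {nid: db[nid] for nid in network_ids}

def validate_subports_bulk(db, subports, trunk_mtu):
    """Validate all subports against networks fetched in one query."""
    counter = [0]
    ids = {sp["network_id"] for sp in subports}
    nets = fetch_networks_bulk(db, ids, counter)
    for sp in subports:
        if nets[sp["network_id"]]["mtu"] > trunk_mtu:
            raise ValueError("subport network MTU exceeds trunk MTU")
    return counter[0]  # number of DB round trips performed

db = {n: {"mtu": 1450} for n in range(125)}
subports = [{"network_id": n} for n in range(125)]
print(validate_subports_bulk(db, subports, trunk_mtu=1500))  # → 1
```

In SQLAlchemy terms this would correspond to a single `IN`-clause query over the collected network IDs rather than one query per subport.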