[infra] Sometimes tests fail because slave node is overloaded, need to increase CPU number

Bug #1494274 reported by Artem Panchenko
This bug affects 1 person
Affects: Fuel for OpenStack
Status: Fix Released
Importance: High
Assigned to: Fuel QA Team

Bug Description

Sometimes tests fail on CI because slave nodes with 1 CPU are overloaded, for example:

https://product-ci.infra.mirantis.net/job/7.0.system_test.ubuntu.cic_maintenance_mode/88/testReport/junit/%28root%29/cic_maintenance_mode_env/cic_maintenance_mode_env/

# OSTF logs:

2015-09-10 01:13:54 ERROR (nose_storage_plugin) fuel_health.tests.smoke.test_neutron_actions.TestNeutron.test_check_neutron_objects_creation
...
fuel_health.common.test_mixins: DEBUG: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/fuel_health/common/test_mixins.py", line 177, in verify
    result = func(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/fuel_health/nmanager.py", line 698, in _create_floating_ip
    pool=floating_ips_pool[0].name)
  File "/usr/lib/python2.6/site-packages/novaclient/v1_1/floating_ips.py", line 44, in create
    return self._create("/os-floating-ips", {'pool': pool}, "floating_ip")
  File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 100, in _create
    _resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 492, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 467, in _cs_request
    resp, body = self._time_request(url, method, **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 440, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 433, in request
    raise exceptions.from_response(resp, body, url, method)
NotFound: The resource could not be found. (HTTP 404) (Request-ID: req-332dbcf2-f69b-4175-a3e7-a1526a22a605)

# Nova logs:

<179>Sep 10 01:13:54 node-5 nova-api 2015-09-10 01:13:54.678 11059 ERROR nova.api.openstack [req-332dbcf2-f69b-4175-a3e7-a1526a22a605 e3186f5f08fe43e8a2d1f8c7877b4230 28c5d2a54a50477584dd5d6c01e50935 - - -] Caught error: Floating ip not found for address 10.109.11.130.
...
2015-09-10 01:13:54.678 11059 TRACE nova.api.openstack FloatingIpNotFoundForAddress: Floating ip not found for address 10.109.11.130.
RESP BODY: {"floatingips": [{"floating_network_id": "090d42e3-9aef-40a6-8d22-4e08ba65a57d", "router_id": null, "fixed_ip_address": null, "floating_ip_address": "10.109.11.130", "tenant_id": "28c5d2a54a50477584dd5d6c01e50935", "status": "DOWN", "port_id": null, "id": "a6d3db0a-9f10-41f0-a3d2-aca2a1705021"}]}
...
<183>Sep 10 01:27:49 node-5 nova-api 2015-09-10 01:27:49.305 11059 DEBUG keystoneclient.session [req-0d182962-9901-469a-b905-8e1654db72b9 e3186f5f08fe43e8a2d1f8c7877b4230 28c5d2a54a50477584dd5d6c01e50935 - - -] RESP: [200] date: Thu, 10 Sep 2015 10:53:36 GMT connection: close content-type: application/json; charset=UTF-8 content-length: 297 x-openstack-request-id: req-943ed5ec-3a5f-4e3b-b8a8-c1b6a8b4b5bf
RESP BODY: {"floatingips": [{"floating_network_id": "090d42e3-9aef-40a6-8d22-4e08ba65a57d", "router_id": null, "fixed_ip_address": null, "floating_ip_address": "10.109.11.130", "tenant_id": "28c5d2a54a50477584dd5d6c01e50935", "status": "DOWN", "port_id": null, "id": "a6d3db0a-9f10-41f0-a3d2-aca2a1705021"}]}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:224

As you can see, Nova returned a 404 error on the floating IP creation request, but the IP was actually allocated:

root@node-5:~# nova floating-ip-list
+--------------------------------------+---------------+-----------+----------+-----------+
| Id                                   | IP            | Server Id | Fixed IP | Pool      |
+--------------------------------------+---------------+-----------+----------+-----------+
| a6d3db0a-9f10-41f0-a3d2-aca2a1705021 | 10.109.11.130 | -         | -        | net04_ext |
+--------------------------------------+---------------+-----------+----------+-----------+
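A QA-side mitigation for this kind of transient 404 is to re-check the floating IP list before retrying, since the log above shows the address can be allocated even when the create call reports NotFound. A minimal sketch, assuming a novaclient-style manager object with `create(pool=...)` and `list()` methods; the helper name and `TransientNotFound` stand-in are hypothetical and are not part of fuel_health or novaclient:

```python
import time


class TransientNotFound(Exception):
    """Stand-in for novaclient's NotFound (HTTP 404)."""


def create_floating_ip_with_retry(manager, pool, attempts=3, delay=1.0):
    """Create a floating IP, tolerating a transient 404 from the API.

    `manager` is assumed to expose `create(pool=...)` and `list()`,
    mirroring the novaclient v1.1 floating-ip manager used in the
    traceback above.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return manager.create(pool=pool)
        except TransientNotFound as exc:
            # The bug shows the IP may be allocated despite the 404,
            # so check the list first to avoid leaking addresses.
            allocated = [ip for ip in manager.list()
                         if getattr(ip, "pool", None) == pool]
            if allocated:
                return allocated[-1]
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

This keeps the test from failing outright on a single glitched response while still surfacing the error if the IP never appears.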

It looks like Nova glitched because the slave node was overloaded (load average > 2 on VMs with 1 CPU):

http://paste.openstack.org/show/454899/
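The overload condition can be spotted on a node with a quick check comparing the 1-minute load average against the CPU count; a sketch (reads standard Linux interfaces, nothing fuel-specific):

```shell
#!/bin/sh
# Flag overload when the 1-minute load average exceeds the CPU count.
cpus=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)
# awk handles the floating-point comparison
overloaded=$(awk -v l="$load" -v c="$cpus" 'BEGIN { if (l > c) print "yes"; else print "no" }')
echo "cpus=$cpus load=$load overloaded=$overloaded"
```

On the affected slaves this would report 1 CPU with load above 2, matching the paste above.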

So we need to increase the CPU count on the test VMs from 1 to 2.
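In fuel-qa the VM shape is driven by environment variables read by the test settings; a sketch of the change (the variable name `SLAVE_NODE_CPU` is an assumption and should be verified against fuel-qa's settings.py before use):

```shell
#!/bin/sh
# Hypothetical fuel-qa knob (verify the name against fuel-qa settings.py):
# raise slave VM CPU count from 1 to 2 so load average stays below CPU count.
export SLAVE_NODE_CPU=2
echo "SLAVE_NODE_CPU=$SLAVE_NODE_CPU"
```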

Revision history for this message
Igor Shishkin (teran) wrote :

Removing the 7.0 milestone since the task is no longer relevant there.

no longer affects: fuel/7.0.x
Dmitry Pyzhov (dpyzhov)
no longer affects: fuel/8.0.x
Dmitry Pyzhov (dpyzhov)
tags: added: area-ci
Revision history for this message
Aleksandra Fedorova (bookwar) wrote :

We have increased the CPU count for nodes in the 7.0 cycle. Please check whether the issue is still actual.

Changed in fuel:
assignee: Fuel CI team (fuel-ci) → Fuel QA Team (fuel-qa)
status: Confirmed → New
Artem Roma (aroma-x)
Changed in fuel:
status: New → Confirmed
Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :

Moving to Fix Committed because we increased the CPU count to 2 and didn't observe such problems.

Changed in fuel:
status: Confirmed → Fix Committed
Changed in fuel:
status: Fix Committed → Fix Released