[tempest] Randomize subnet CIDR to avoid test clashes

Bug #1893188 reported by Rodolfo Alonso
This bug affects 1 person
Affects: neutron
Status: Fix Released
Importance: Low
Assigned to: Lajos Katona
Milestone: —

Bug Description

Although each network is created with a different project_id, we still have errors like [1][2]:
"""
neutronclient.common.exceptions.BadRequest: Invalid input for operation: Requested subnet with cidr: 20.0.0.0/24 for network: 0eb8805e-8307-4c1e-86cd-6e764f6c4f9f overlaps with another subnet.
Neutron server returns request_ids: ['req-00b171b6-6d20-46d8-a859-b6e870034e7d']
"""

[1]https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b7/745330/1/check/neutron-fullstack-with-uwsgi/0b75784/testr_results.html
[2]http://paste.openstack.org/show/797203/
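
The title suggests avoiding the clash by randomizing the CIDR each test uses instead of hard-coding 20.0.0.0/24. A minimal sketch of that idea (the helper name and base range below are assumptions for illustration, not the actual patch proposed for this bug):

import ipaddress
import random


def random_ipv4_cidr(base="20.0.0.0/16", prefixlen=24):
    """Pick a random /24 inside the given base range.

    Tests that create their own subnet can use this instead of a
    hard-coded 20.0.0.0/24, which makes the "overlaps with another
    subnet" BadRequest far less likely when tests share a backend.
    """
    subnets = list(ipaddress.ip_network(base).subnets(new_prefix=prefixlen))
    return str(random.choice(subnets))


# Example usage (client call shown schematically):
# cidr = random_ipv4_cidr()
# client.create_subnet(network_id=net_id, ip_version=4, cidr=cidr)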

Tags: gate-failure
Revision history for this message
Slawek Kaplonski (slaweq) wrote :

The interesting thing is that in this test the subnet should be created only once. Why does it try to create it a second time?

tags: added: gate-failure
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.opendev.org/749041

Changed in neutron:
assignee: nobody → Lajos Katona (lajos-katona)
status: New → In Progress
Changed in neutron:
importance: Undecided → Low
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (master)

Change abandoned by Lajos Katona (<email address hidden>) on branch: master
Review: https://review.opendev.org/749041

Revision history for this message
Lajos Katona (lajos-katona) wrote :

A similar issue was reported by slaweq (see the 10.14 CI meeting: http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2020-10-14.log.html#t2020-10-14T15:31:49)

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_095/756678/5/check/neutron-fullstack-with-uwsgi/095d3c9/testr_results.html

I checked the logs for the failure:
http://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_095/756678/5/check/neutron-fullstack-with-uwsgi/095d3c9/controller/logs/dsvm-fullstack-logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_primary/neutron-server--2020-10-12--22-48-43-466296_log.txt

From this log, the floating IP association was successful:

2020-10-12 22:50:27.267 123082 INFO neutron.db.l3_db [req-86ab0ec8-955c-4ce1-b9cf-9ce99f6a805b - - - - -] Floating IP 8dccf7d0-6b9c-43d4-8d7a-fb56ac31867f associated. External IP: 240.184.213.73, port: e83fb4c0-8cf0-4fc7-809b-3db0df017e27.

But after that, MySQL became unresponsive:
2020-10-12 22:50:36.846 123084 ERROR oslo_db.sqlalchemy.engines [req-d3cb5aef-1bc8-4cc5-b721-039ab2b9cd14 - - - - -] Database connection was found disconnected; reconnecting: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
[SQL: SELECT 1]
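
The "SELECT 1" in that traceback is just a trivial liveness probe run against a pooled connection; when it fails, oslo_db logs the "found disconnected; reconnecting" message above. A plain-SQLAlchemy sketch of that kind of check (the connection URL is a placeholder, not the fullstack job's real one):

import sqlalchemy as sa
from sqlalchemy import exc

# Placeholder URL for illustration only.
engine = sa.create_engine("mysql+pymysql://user:pass@127.0.0.1/neutron")


def connection_is_alive(engine):
    """Run the same trivial probe ('SELECT 1') that appears in the
    traceback; a DBAPI-level disconnect surfaces here as an
    OperationalError such as 'Lost connection to MySQL server'."""
    try:
        with engine.connect() as conn:
            conn.execute(sa.text("SELECT 1"))
        return True
    except exc.OperationalError:
        return False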

Revision history for this message
Lajos Katona (lajos-katona) wrote :

Not sure if this can be the root cause of all the similar issues, but it seems bad anyway.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

I think the original issue in the fullstack test was caused by a slow node and slow processing of the API request by Neutron. Because of that, the request library retried the HTTP request, and by then the "first" subnet had already been created.
As I don't see such errors in the gate recently, I'm closing this bug now.
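
For illustration of that failure mode, a hypothetical sketch (the helper and its retry logic are made up, not neutronclient internals): when the first subnet-create POST times out client-side but still commits server-side, a blind retry with the same fixed CIDR gets the 400 "overlaps with another subnet" error from the description.

import requests


def create_subnet_with_retry(url, token, body, attempts=2, timeout=5):
    """Naive retry of a non-idempotent POST, shown only to illustrate
    how a slow server plus a client-side retry yields the overlap error."""
    last_exc = None
    for _ in range(attempts):
        try:
            resp = requests.post(
                url,
                json=body,
                headers={"X-Auth-Token": token},
                timeout=timeout,
            )
            resp.raise_for_status()  # the retry gets 400 BadRequest here
            return resp.json()
        except requests.exceptions.Timeout as exc:
            # The server may have committed the subnet despite the
            # timeout; retrying with the same CIDR then collides with it.
            last_exc = exc
    raise last_exc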

Changed in neutron:
status: In Progress → Incomplete
Revision history for this message
Lajos Katona (lajos-katona) wrote :
Changed in neutron:
status: Incomplete → Fix Released