check_public_network_connectivity fails with timeout

Bug #1194026 reported by Avishay Traeger on 2013-06-24
This bug affects 14 people
Affects: neutron — Fix Released, Critical, assigned to Nachi Ueno, milestone 2013.2
Affects: tempest — Fix Released, High, assigned to Nachi Ueno, milestone havana-3

Bug Description

2013-06-24 06:11:14.330 | ======================================================================
2013-06-24 06:11:14.330 | FAIL: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_008_check_public_network_connectivity[gate,smoke]
2013-06-24 06:11:14.331 | ----------------------------------------------------------------------
2013-06-24 06:11:14.331 | _StringException: Traceback (most recent call last):
2013-06-24 06:11:14.331 | File "/opt/stack/new/tempest/tempest/scenario/", line 255, in test_008_check_public_network_connectivity
2013-06-24 06:11:14.331 | self._check_vm_connectivity(ip_address, ssh_login, private_key)
2013-06-24 06:11:14.331 | File "/opt/stack/new/tempest/tempest/scenario/", line 455, in _check_vm_connectivity
2013-06-24 06:11:14.331 | "reachable" % ip_address)
2013-06-24 06:11:14.331 | File "/usr/lib/python2.7/unittest/", line 420, in assertTrue
2013-06-24 06:11:14.331 | raise self.failureException(msg)
2013-06-24 06:11:14.331 | AssertionError: Timed out waiting for to become reachable

Jeremy Stanley (fungi) on 2013-06-24
affects: openstack-ci → tempest
David Kranz (david-kranz) wrote :

This has been happening a fair bit. logstash shows many hits for the error and they all seem to be in quantum runs where the non-quantum run of the test passed. So changing to neutron until proven otherwise.

affects: tempest → neutron
David Kranz (david-kranz) wrote :

This is failing the gate almost once per hour. Perhaps it should be given a somewhat high priority.

Clint Byrum (clint-fewbar) wrote :

47 rechecks, it definitely deserves some immediate attention!

Sean Dague (sdague) on 2013-07-09
Changed in neutron:
status: New → Confirmed
Changed in neutron:
importance: Undecided → High
Maru Newby (maru) on 2013-07-09
Changed in neutron:
assignee: nobody → Maru Newby (maru)
Nachi Ueno (nati-ueno) wrote :

This could be a timing bug in the neutron l3-agent.

I grepped the l3-agent log using

In this test sequence, it is used twice: the first time is for, the second is for the tempest test.

This is the first one:

It works and succeeds.

But the second time, it looks like it does nothing...

I'll keep checking.

Maru Newby (maru) on 2013-07-09
Changed in neutron:
assignee: Maru Newby (maru) → nobody
Thierry Carrez (ttx) wrote :

Would be great to nail this one down before the review rush to h2

Changed in neutron:
importance: High → Critical
milestone: none → havana-2
Changed in neutron:
assignee: nobody → Gary Kotton (garyk)
Nachi Ueno (nati-ueno) wrote :

I'm checking the logs, comparing a successful run with a failed one.

(I grepped

The q-srv log is the same, but the l3-agent log is quite different.

Also, this error started on 6/24, so I checked for a commit that:
- landed before 6/24
- modifies the l3-agent

This commit looks related. It may affect floating IPs as well.

Author: Jenkins <email address hidden>
Date: Wed May 29 14:40:59 2013 +0000

    Merge "Configurable external gateway modes"

I have no proof of this, and there is still no explanation for why it started happening at a high rate on 6/24.
(We upgraded greenlet on 6/23, which may be related, but I'm not sure.)

I'll try testing neutron without this commit.

Nachi Ueno (nati-ueno) wrote :


I think I can reproduce this issue (if the test script is not wrong...).

Here is the test script and the result.

I'm trying to fix the issue.

Nachi Ueno (nati-ueno) wrote :

Sorry, the script did not copy correctly.

Here is the correct one.

Nachi Ueno (nati-ueno) wrote :

I think I found a possible cause of the issue.

Lines starting with **** Get are the RPC message log.
Lines starting with [[[[[ Process show when the l3-agent starts processing RPC messages.
(I added RPC message numbers for debugging.)

We expected all RPC calls to be executed in a serialized way. However, they are executed in random order!

We are using a semaphore in eventlet, but it looks like we should use a queue instead of a semaphore.
We should also combine RPC messages: when we get concurrent requests, the l3-agent processes them one by one, so the processing for a new IP can sometimes be delayed for a long time.
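A hypothetical sketch of the ordering problem being described (this is not the actual neutron code; names like `rpc_consumer` are illustrative): a FIFO queue with a single consumer processes messages strictly in arrival order, which is the serialized behavior the l3-agent expects, whereas waking handlers off a bare semaphore gives no such ordering guarantee.

```python
import queue
import threading

updates = queue.Queue()   # FIFO: messages come out in arrival order
processed = []

def rpc_consumer():
    # A single consumer drains the queue serially, in arrival order.
    while True:
        msg = updates.get()
        if msg is None:       # sentinel tells the worker to stop
            break
        processed.append(msg)

worker = threading.Thread(target=rpc_consumer)
worker.start()

for i in range(5):            # simulate incoming RPC notifications
    updates.put(i)
updates.put(None)
worker.join()

print(processed)              # [0, 1, 2, 3, 4] -- FIFO order preserved
```

With a semaphore, each notification wakes some waiting handler, but which handler runs first is up to the scheduler, so processing order can diverge from arrival order.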

Changed in tempest:
status: New → Incomplete
Nachi Ueno (nati-ueno) wrote :

I'm writing a patch.

Changed in tempest:
status: Incomplete → Invalid
Changed in neutron:
status: Confirmed → In Progress
assignee: Gary Kotton (garyk) → Nachi Ueno (nati-ueno)

Submitter: Jenkins
Branch: master

commit 57e1fa32648a28ae83e3c3033258fe08de2e5fca
Author: Nachi Ueno <email address hidden>
Date: Fri Jul 12 12:21:46 2013 -0700

    Improve l3-agent performance and prevent races in it.

    Fixes bug 1194026
    Fixes bug 1200749

    Introduce a looping call for performing synchronization with
    neutron server.
    The sync will be performed only if router changes are notified
    via rpc. Only affected routers will be synchronized.

    Changes will be implemented by the l3 agent spawning a
    distinct greenthread for each router - iptables will
    be executed only once using iptables_manager.defer_apply_on.

    This patch will prevent the occurrence of the following issues:
    - Out-of-order rpc message processing
    - Long processing time for router changes due to serial execution
    - Occasional and expected RPC blocks for long periods
    - Unnecessary processing of multiple requests

    Change-Id: I0978d1c38ac5c38c4548e5b1877857bb5cac3b81
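The approach in the commit message above can be sketched roughly as follows. This is an illustrative mock, not neutron's actual implementation: `FakeIptablesManager` and `process_router` are stand-ins showing one worker per affected router, with rule changes batched so iptables is applied once per router rather than once per rule (the `defer_apply_on` idea).

```python
import threading
from contextlib import contextmanager

class FakeIptablesManager:
    """Stand-in for neutron's iptables_manager; illustrative only."""
    def __init__(self):
        self.pending = []
        self.apply_calls = 0

    @contextmanager
    def defer_apply(self):
        # Batch all rule changes and flush them with a single apply,
        # mirroring iptables_manager.defer_apply_on.
        yield
        self.apply_calls += 1

def process_router(router_id, managers, done):
    mgr = FakeIptablesManager()
    with mgr.defer_apply():
        for rule in ("snat", "dnat", "floating-ip"):
            mgr.pending.append(f"{router_id}:{rule}")
    managers[router_id] = mgr
    done.append(router_id)

managers, done = {}, []
# One worker per affected router, so one slow router cannot block the rest.
threads = [threading.Thread(target=process_router, args=(rid, managers, done))
           for rid in ("r1", "r2", "r3")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(done))                  # ['r1', 'r2', 'r3']
print(managers["r1"].apply_calls)    # 1: rules applied once, not per rule
```

Neutron uses eventlet greenthreads rather than OS threads, but the structure is the same: concurrent per-router processing plus deferred iptables application.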

Changed in neutron:
status: In Progress → Fix Committed
Thierry Carrez (ttx) on 2013-07-17
Changed in neutron:
status: Fix Committed → Fix Released
Nachi Ueno (nati-ueno) on 2013-07-18
Changed in neutron:
status: Fix Released → In Progress
milestone: havana-2 → havana-3
Nachi Ueno (nati-ueno) wrote :

Hmm, I'm still getting errors.
It looks like the previous patch improved the success rate, but it still fails some percentage of the time.

(fail case)

(pass case)

Nachi Ueno (nati-ueno) wrote :

This is before the patch merged

Nachi Ueno (nati-ueno) wrote :

This is after the patch merged

Nachi Ueno (nati-ueno) wrote :

One finding from the log: the timing of "Got routers updated notification" looks delayed.

Nachi Ueno (nati-ueno) wrote :

OK, the new fix has worked 10 times without this failure.

The previous patch was also working 10 times, but after that test I added _rpc_loop, and in the gate failure the _rpc_loop appears to have stopped (or been long delayed).

In this patch I remove the _rpc_loop.

Nachi Ueno (nati-ueno) wrote :

It looks like the timeout on the tempest side is also too short.
(It is 20 seconds.)

Changed in tempest:
status: Invalid → Confirmed
assignee: nobody → Nachi Ueno (nati-ueno)
importance: Undecided → High
Changed in tempest:
status: Confirmed → In Progress
Nachi Ueno (nati-ueno) on 2013-07-24
Changed in tempest:
milestone: none → havana-3

Submitter: Jenkins
Branch: master

commit 6d580bee934199c4935e4d754cbafb67cc859961
Author: Nachi Ueno <email address hidden>
Date: Wed Jul 24 10:58:11 2013 -0700

    Increase ping timeout on scenario testing

    Some gating may fail because ping timeout is short (20s).
    In this commit, we will increase up to 60s.

    - Added ping_timeout for compute config with default 60s
    - Replaced hardcoded ssh timeout value with ssh_timeout

    Fixes bug 1194026

    Change-Id: If4e64aff17fc9aea1b6de03c684dff145ef5e6f2
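The retry-until-timeout pattern behind this fix can be sketched as below. `ping_timeout` is the option named in the commit; the `probe` callable here is an assumption standing in for tempest's actual ping/ssh reachability check, not its real API.

```python
import time

def wait_until_reachable(probe, timeout=60.0, interval=0.1):
    """Retry probe() until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# In tempest the probe would be something like a ping or ssh attempt
# against the instance's floating IP. Raising the timeout from 20s to
# 60s gives a slow l3-agent time to wire up the floating IP before the
# test gives up.
attempts = []
def fake_probe():
    attempts.append(1)
    return len(attempts) >= 3   # succeeds on the third try

print(wait_until_reachable(fake_probe, timeout=5))   # True
```

The design point is that a short fixed timeout turns any transient slowness in the data path into a test failure, so the timeout must comfortably exceed the worst expected provisioning latency.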

Changed in tempest:
status: In Progress → Fix Released
Nachi Ueno (nati-ueno) wrote :

Gating is going back to normal again.
(see graph

We can start voting again.

I haven't seen these errors in the last 48 hours of runs.

So IMO, we can close this bug.
# If we find this bug again, let's re-open it.

Changed in neutron:
importance: Critical → Medium
importance: Medium → Critical
status: In Progress → Fix Committed
Chmouel Boudjnah (chmouel) wrote :

@Nachi we just got a new one again in

Chmouel Boudjnah (chmouel) wrote :
tags: added: tempest
Thierry Carrez (ttx) on 2013-09-05
Changed in neutron:
status: Fix Committed → Fix Released
Thierry Carrez (ttx) on 2013-10-17
Changed in neutron:
milestone: havana-3 → 2013.2
