iptables-restore calls fail acquiring 'xlock' with iptables from master

Bug #1712185 reported by Ihar Hrachyshka
Affects: neutron
Status: Fix Released
Importance: Medium
Assigned to: Ihar Hrachyshka

Bug Description

This happens when you use an iptables build that includes https://git.netfilter.org/iptables/commit/?id=999eaa241212d3952ddff39a99d0d55a74e3639e (e.g. the one from the latest RHEL repos).

neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_established_connection_is_cut(IptablesFirewallDriver,without ipset)
--------------------------------------------------------------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "neutron/tests/functional/agent/test_firewall.py", line 113, in setUp
        self.firewall.prepare_port_filter(self.src_port_desc)
      File "neutron/agent/linux/iptables_firewall.py", line 204, in prepare_port_filter
        return self.iptables.apply()
      File "neutron/agent/linux/iptables_manager.py", line 432, in apply
        return self._apply()
      File "neutron/agent/linux/iptables_manager.py", line 440, in _apply
        first = self._apply_synchronized()
      File "neutron/agent/linux/iptables_manager.py", line 539, in _apply_synchronized
        '\n'.join(log_lines))
      File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
        self.force_reraise()
      File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
        six.reraise(self.type_, self.value, self.tb)
      File "neutron/agent/linux/iptables_manager.py", line 518, in _apply_synchronized
        run_as_root=True)
      File "neutron/agent/linux/utils.py", line 156, in execute
        raise ProcessExecutionError(msg, returncode=returncode)
    neutron.agent.linux.utils.ProcessExecutionError: Exit code: 4; Stdin: # Generated by iptables_manager
    *filter
    :neutron-filter-top - [0:0]
    :run.py-FORWARD - [0:0]
    :run.py-INPUT - [0:0]
    :run.py-OUTPUT - [0:0]
    :run.py-it-veth0bc5 - [0:0]
    :run.py-local - [0:0]
    :run.py-ot-veth0bc5 - [0:0]
    :run.py-sg-chain - [0:0]
    :run.py-sg-fallback - [0:0]
    -I FORWARD 1 -j neutron-filter-top
    -I FORWARD 2 -j run.py-FORWARD
    -I INPUT 1 -j run.py-INPUT
    -I OUTPUT 1 -j neutron-filter-top
    -I OUTPUT 2 -j run.py-OUTPUT
    -I neutron-filter-top 1 -j run.py-local
    -I run.py-FORWARD 1 -m physdev --physdev-out test-veth0bc5b8 --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j run.py-sg-chain
    -I run.py-FORWARD 2 -m physdev --physdev-in test-veth0bc5b8 --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j run.py-sg-chain
    -I run.py-INPUT 1 -m physdev --physdev-in test-veth0bc5b8 --physdev-is-bridged -m comment --comment "Direct incoming traffic from VM to the security group chain." -j run.py-ot-veth0bc5
    -I run.py-it-veth0bc5 1 -p ipv6-icmp -m icmp6 --icmpv6-type 130 -j RETURN
    -I run.py-it-veth0bc5 2 -p ipv6-icmp -m icmp6 --icmpv6-type 134 -j RETURN
    -I run.py-it-veth0bc5 3 -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j RETURN
    -I run.py-it-veth0bc5 4 -p ipv6-icmp -m icmp6 --icmpv6-type 136 -j RETURN
    -I run.py-it-veth0bc5 5 -m state --state RELATED,ESTABLISHED -m comment --comment "Direct packets associated with a known session to the RETURN chain." -j RETURN
    -I run.py-it-veth0bc5 6 -m state --state INVALID -m comment --comment "Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack." -j DROP
    -I run.py-it-veth0bc5 7 -m comment --comment "Send unmatched traffic to the fallback chain." -j run.py-sg-fallback
    -I run.py-ot-veth0bc5 1 -s ::/128 -d ff02::/16 -p ipv6-icmp -m icmp6 --icmpv6-type 131 -m comment --comment "Allow IPv6 ICMP traffic." -j RETURN
    -I run.py-ot-veth0bc5 2 -s ::/128 -d ff02::/16 -p ipv6-icmp -m icmp6 --icmpv6-type 135 -m comment --comment "Allow IPv6 ICMP traffic." -j RETURN
    -I run.py-ot-veth0bc5 3 -s ::/128 -d ff02::/16 -p ipv6-icmp -m icmp6 --icmpv6-type 143 -m comment --comment "Allow IPv6 ICMP traffic." -j RETURN
    -I run.py-ot-veth0bc5 4 -p ipv6-icmp -m icmp6 --icmpv6-type 134 -m comment --comment "Drop IPv6 Router Advts from VM Instance." -j DROP
    -I run.py-ot-veth0bc5 5 -p ipv6-icmp -m comment --comment "Allow IPv6 ICMP traffic." -j RETURN
    -I run.py-ot-veth0bc5 6 -p udp -m udp --sport 546 --dport 547 -m comment --comment "Allow DHCP client traffic." -j RETURN
    -I run.py-ot-veth0bc5 7 -p udp -m udp --sport 547 --dport 546 -m comment --comment "Prevent DHCP Spoofing by VM." -j DROP
    -I run.py-ot-veth0bc5 8 -m state --state RELATED,ESTABLISHED -m comment --comment "Direct packets associated with a known session to the RETURN chain." -j RETURN
    -I run.py-ot-veth0bc5 9 -m state --state INVALID -m comment --comment "Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack." -j DROP
    -I run.py-ot-veth0bc5 10 -m comment --comment "Send unmatched traffic to the fallback chain." -j run.py-sg-fallback
    -I run.py-sg-chain 1 -m physdev --physdev-out test-veth0bc5b8 --physdev-is-bridged -m comment --comment "Jump to the VM specific chain." -j run.py-it-veth0bc5
    -I run.py-sg-chain 2 -m physdev --physdev-in test-veth0bc5b8 --physdev-is-bridged -m comment --comment "Jump to the VM specific chain." -j run.py-ot-veth0bc5
    -I run.py-sg-chain 3 -j ACCEPT
    -I run.py-sg-fallback 1 -m comment --comment "Default drop rule for unmatched traffic." -j DROP
    COMMIT
    # Completed by iptables_manager
    # Generated by iptables_manager
    *raw
    :run.py-OUTPUT - [0:0]
    :run.py-PREROUTING - [0:0]
    -I OUTPUT 1 -j run.py-OUTPUT
    -I PREROUTING 1 -j run.py-PREROUTING
    -I run.py-PREROUTING 1 -m physdev --physdev-in brq7a7f000b-b8 -m comment --comment "Set zone for -veth0bc5b8" -j CT --zone 1
    -I run.py-PREROUTING 2 -i brq7a7f000b-b8 -m comment --comment "Set zone for -veth0bc5b8" -j CT --zone 1
    -I run.py-PREROUTING 3 -m physdev --physdev-in test-veth0bc5b8 -m comment --comment "Set zone for -veth0bc5b8" -j CT --zone 1
    COMMIT
    # Completed by iptables_manager
    ; Stdout: ; Stderr: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?

To stay on the safe side, we should always call iptables with -w (assuming it's supported by the platform).
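The fallback strategy eventually adopted can be sketched as follows. This is an illustrative sketch only, not neutron's actual code: the function names (`build_cmd`, `run_restore`) and the bare `iptables-restore` invocation are assumptions; the error code 4 and the xtables-lock message are taken from the traceback above.

```python
# Illustrative sketch of "try without -w, retry with -w on an xlock
# failure". Names are hypothetical, not neutron's real API.
import subprocess

XTABLES_LOCK_HINT = "Another app is currently holding the xtables lock"

def build_cmd(use_wait, timeout=5):
    """Build an iptables-restore command line, adding -w only when asked."""
    cmd = ["iptables-restore"]
    if use_wait:
        cmd += ["-w", str(timeout)]
    return cmd

def run_restore(rules, timeout=5):
    """Try without -w first; on an xlock failure, retry with -w <timeout>."""
    proc = subprocess.run(build_cmd(False), input=rules,
                          capture_output=True, text=True)
    if proc.returncode == 4 and XTABLES_LOCK_HINT in proc.stderr:
        # The lock holder blocked us; retry and let iptables wait for it.
        proc = subprocess.run(build_cmd(True, timeout), input=rules,
                              capture_output=True, text=True)
    return proc.returncode
```

On platforms whose iptables lacks -w, the first call simply never fails with the lock message, so the retry path is never taken.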

Changed in neutron:
importance: Undecided → Medium
status: New → Confirmed
tags: added: functional-tests
Changed in neutron:
assignee: nobody → Ihar Hrachyshka (ihar-hrachyshka)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/495974

Changed in neutron:
status: Confirmed → In Progress
Changed in neutron:
assignee: Ihar Hrachyshka (ihar-hrachyshka) → Jakub Libosvar (libosvar)
Changed in neutron:
assignee: Jakub Libosvar (libosvar) → Ihar Hrachyshka (ihar-hrachyshka)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/495974
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a521bf0393d33d6e69f59900942404c2b5c84d83
Submitter: Jenkins
Branch: master

commit a521bf0393d33d6e69f59900942404c2b5c84d83
Author: Ihar Hrachyshka <email address hidden>
Date: Mon Aug 21 12:15:25 2017 -0700

    Make use of -w argument for iptables calls

    Upstream iptables added support for -w ('wait') argument to
    iptables-restore. It makes the command grab a 'xlock' that guarantees
    that no two iptables calls will mess a table if called in parallel.
    [This somewhat resembles what we try to achieve with a file lock we
    grab in iptables manager's _apply_synchronized.]

    If two processes call to iptables-restore or iptables in parallel, the
    second call risks failing, returning error code = 4, and also printing
    the following error:

        Another app is currently holding the xtables lock. Perhaps you want
        to use the -w option?

    If we call to iptables / iptables-restore with -w though, it will wait
    for the xlock release before proceeding, and won't fail.

    Though the feature was added in iptables/master only and is not part of
    an official iptables release, it was already backported to RHEL 7.x
    iptables package, and so we need to adopt to it. At the same time, we
    can't expect any underlying platform to support the argument.

    A solution here is to call iptables-restore with -w when a regular call
    failed. Also, the patch adds -w to all iptables calls, in the iptables
    manager as well as in ipset-cleanup.

    Since we don't want to lock agent in case current xlock owner doesn't
    release it in reasonable time, we limit the time we wait to ~1/3 of
    report_interval, to give the agent some time to recover without
    triggering expensive fullsync.

    In the future, we may be able to get rid of our custom synchronization
    lock that we use in iptables manager. But this will require all
    supported platforms to get the feature in and will take some time.

    Closes-Bug: #1712185
    Change-Id: I94e54935df7c6caa2480eca19e851cb4882c0f8b
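The timeout choice described in the commit message (wait for roughly a third of report_interval so the agent can recover without triggering a fullsync) can be sketched as below. The function name and the 1-second floor are assumptions for illustration; neutron's actual implementation may differ.

```python
# Hypothetical sketch of deriving the -w timeout from report_interval,
# per the commit message above: ~1/3 of the reporting interval.
def xlock_wait_time(report_interval):
    """Return a -w timeout in whole seconds (iptables -w takes seconds)."""
    # Floor at 1 second so we always wait at least a little for the lock.
    return max(1, int(report_interval / 3.0))
```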

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/501312

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/501317

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/501318

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/501312
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=da215ba3dda27df86834e40a23eb79e87876c931
Submitter: Jenkins
Branch: stable/pike

commit da215ba3dda27df86834e40a23eb79e87876c931
Author: Ihar Hrachyshka <email address hidden>
Date: Mon Aug 21 12:15:25 2017 -0700

    Make use of -w argument for iptables calls

    Upstream iptables added support for -w ('wait') argument to
    iptables-restore. It makes the command grab a 'xlock' that guarantees
    that no two iptables calls will mess a table if called in parallel.
    [This somewhat resembles what we try to achieve with a file lock we
    grab in iptables manager's _apply_synchronized.]

    If two processes call to iptables-restore or iptables in parallel, the
    second call risks failing, returning error code = 4, and also printing
    the following error:

        Another app is currently holding the xtables lock. Perhaps you want
        to use the -w option?

    If we call to iptables / iptables-restore with -w though, it will wait
    for the xlock release before proceeding, and won't fail.

    Though the feature was added in iptables/master only and is not part of
    an official iptables release, it was already backported to RHEL 7.x
    iptables package, and so we need to adopt to it. At the same time, we
    can't expect any underlying platform to support the argument.

    A solution here is to call iptables-restore with -w when a regular call
    failed. Also, the patch adds -w to all iptables calls, in the iptables
    manager as well as in ipset-cleanup.

    Since we don't want to lock agent in case current xlock owner doesn't
    release it in reasonable time, we limit the time we wait to ~1/3 of
    report_interval, to give the agent some time to recover without
    triggering expensive fullsync.

    In the future, we may be able to get rid of our custom synchronization
    lock that we use in iptables manager. But this will require all
    supported platforms to get the feature in and will take some time.

    Closes-Bug: #1712185
    Change-Id: I94e54935df7c6caa2480eca19e851cb4882c0f8b
    (cherry picked from commit a521bf0393d33d6e69f59900942404c2b5c84d83)

tags: added: in-stable-pike
tags: added: in-stable-newton
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/newton)

Reviewed: https://review.openstack.org/501318
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ae8e37ef6ce254a070d7908fc803baa24e481344
Submitter: Jenkins
Branch: stable/newton

commit ae8e37ef6ce254a070d7908fc803baa24e481344
Author: Ihar Hrachyshka <email address hidden>
Date: Mon Aug 21 12:15:25 2017 -0700

    Make use of -w argument for iptables calls

    Upstream iptables added support for -w ('wait') argument to
    iptables-restore. It makes the command grab a 'xlock' that guarantees
    that no two iptables calls will mess a table if called in parallel.
    [This somewhat resembles what we try to achieve with a file lock we
    grab in iptables manager's _apply_synchronized.]

    If two processes call to iptables-restore or iptables in parallel, the
    second call risks failing, returning error code = 4, and also printing
    the following error:

        Another app is currently holding the xtables lock. Perhaps you want
        to use the -w option?

    If we call to iptables / iptables-restore with -w though, it will wait
    for the xlock release before proceeding, and won't fail.

    Though the feature was added in iptables/master only and is not part of
    an official iptables release, it was already backported to RHEL 7.x
    iptables package, and so we need to adopt to it. At the same time, we
    can't expect any underlying platform to support the argument.

    A solution here is to call iptables-restore with -w when a regular call
    failed. Also, the patch adds -w to all iptables calls, in the iptables
    manager as well as in ipset-cleanup.

    Since we don't want to lock agent in case current xlock owner doesn't
    release it in reasonable time, we limit the time we wait to ~1/3 of
    report_interval, to give the agent some time to recover without
    triggering expensive fullsync.

    In the future, we may be able to get rid of our custom synchronization
    lock that we use in iptables manager. But this will require all
    supported platforms to get the feature in and will take some time.

    Conflicts:
          neutron/agent/linux/iptables_manager.py
          neutron/cmd/ipset_cleanup.py

    Closes-Bug: #1712185
    Change-Id: I94e54935df7c6caa2480eca19e851cb4882c0f8b
    (cherry picked from commit a521bf0393d33d6e69f59900942404c2b5c84d83)

tags: added: in-stable-ocata
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/ocata)

Reviewed: https://review.openstack.org/501317
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0ac2d9d25b567dd1cfd02832ae39a546b6205a93
Submitter: Jenkins
Branch: stable/ocata

commit 0ac2d9d25b567dd1cfd02832ae39a546b6205a93
Author: Ihar Hrachyshka <email address hidden>
Date: Mon Aug 21 12:15:25 2017 -0700

    Make use of -w argument for iptables calls

    Upstream iptables added support for -w ('wait') argument to
    iptables-restore. It makes the command grab a 'xlock' that guarantees
    that no two iptables calls will mess a table if called in parallel.
    [This somewhat resembles what we try to achieve with a file lock we
    grab in iptables manager's _apply_synchronized.]

    If two processes call to iptables-restore or iptables in parallel, the
    second call risks failing, returning error code = 4, and also printing
    the following error:

        Another app is currently holding the xtables lock. Perhaps you want
        to use the -w option?

    If we call to iptables / iptables-restore with -w though, it will wait
    for the xlock release before proceeding, and won't fail.

    Though the feature was added in iptables/master only and is not part of
    an official iptables release, it was already backported to RHEL 7.x
    iptables package, and so we need to adopt to it. At the same time, we
    can't expect any underlying platform to support the argument.

    A solution here is to call iptables-restore with -w when a regular call
    failed. Also, the patch adds -w to all iptables calls, in the iptables
    manager as well as in ipset-cleanup.

    Since we don't want to lock agent in case current xlock owner doesn't
    release it in reasonable time, we limit the time we wait to ~1/3 of
    report_interval, to give the agent some time to recover without
    triggering expensive fullsync.

    In the future, we may be able to get rid of our custom synchronization
    lock that we use in iptables manager. But this will require all
    supported platforms to get the feature in and will take some time.

    Conflicts:
          neutron/agent/linux/iptables_manager.py
          neutron/cmd/ipset_cleanup.py

    Closes-Bug: #1712185
    Change-Id: I94e54935df7c6caa2480eca19e851cb4882c0f8b
    (cherry picked from commit a521bf0393d33d6e69f59900942404c2b5c84d83)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 10.0.4

This issue was fixed in the openstack/neutron 10.0.4 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 11.0.1

This issue was fixed in the openstack/neutron 11.0.1 release.

Revision history for this message
J-P methot (jphilippemethot) wrote :

We're running OpenStack Ocata on CentOS 7 in production (neutron 10.0.3-1). I have applied the patch provided; however, the bug still happens. Is the fix dependent on any other change to the code that may not be in the neutron 10.0.3-1.el7 package?

Additionally, besides the error messages in the L3 agent logs, do we know whether this bug causes any other unwanted OpenStack behavior?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/510988

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.openstack.org/510988
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2f0ffa998a6fb171109196df8c8d3ce9f9f5a215
Submitter: Jenkins
Branch: master

commit 2f0ffa998a6fb171109196df8c8d3ce9f9f5a215
Author: Ihar Hrachyshka <email address hidden>
Date: Tue Oct 10 11:21:03 2017 -0700

    iptables: don't log lock error if we haven't passed -w

    In this case, it's an expected error, and we retry again with -w.

    Related-Bug: #1712185
    Change-Id: I97bf3032b5cebcbce51a3b3de6cb128ca342bd87

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/pike)

Related fix proposed to branch: stable/pike
Review: https://review.openstack.org/514428

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/ocata)

Related fix proposed to branch: stable/ocata
Review: https://review.openstack.org/514429

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/newton)

Related fix proposed to branch: stable/newton
Review: https://review.openstack.org/514430

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (stable/newton)

Change abandoned by Ihar Hrachyshka (<email address hidden>) on branch: stable/newton
Review: https://review.openstack.org/514430

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/514428
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=68879cec0b17af304a942bf1e5e5ec80eed513d9
Submitter: Zuul
Branch: stable/pike

commit 68879cec0b17af304a942bf1e5e5ec80eed513d9
Author: Ihar Hrachyshka <email address hidden>
Date: Tue Oct 10 11:21:03 2017 -0700

    iptables: don't log lock error if we haven't passed -w

    In this case, it's an expected error, and we retry again with -w.

    Related-Bug: #1712185
    Change-Id: I97bf3032b5cebcbce51a3b3de6cb128ca342bd87
    (cherry picked from commit 2f0ffa998a6fb171109196df8c8d3ce9f9f5a215)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 12.0.0.0b1

This issue was fixed in the openstack/neutron 12.0.0.0b1 development milestone.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.openstack.org/513171
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6c50ad5858175e2eddeed7a3e28cf8b53c2eb2b3
Submitter: Zuul
Branch: master

commit 6c50ad5858175e2eddeed7a3e28cf8b53c2eb2b3
Author: Brian Haley <email address hidden>
Date: Wed Oct 18 15:38:29 2017 -0400

    Always call iptables-restore with -w if done once

    In the case where we called iptables-restore with a
    -w argument and it succeeded, we should short-circuit
    future calls to always use -w, instead of trying
    without it, just to fall-back to using it on failure.

    While analyzing some l3-agent log files I have seen
    lots of "Perhaps you want to use the -w option?",
    followed by a call with -w, followed by not using it
    the next time. Changing this can save one failing
    call to iptables-restore.

    Change-Id: Icac99eb1d43648c64b6beaee0d6201f990eacb51
    Related-bug: #1712185
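The short-circuit this commit describes, remembering that -w worked once and always passing it afterwards, can be sketched like this. The class and method names are illustrative, not neutron's actual code.

```python
# Illustrative memoization of -w support: after the first successful call
# with -w, stop probing without it. Names are hypothetical.
class RestoreRunner:
    def __init__(self):
        self._use_wait = False  # flips to True once -w has succeeded

    def args(self, timeout=5):
        """Command line for the next iptables-restore call."""
        base = ["iptables-restore"]
        return base + ["-w", str(timeout)] if self._use_wait else base

    def mark_wait_supported(self):
        """Record that a call with -w succeeded; use it from now on."""
        self._use_wait = True
```

This saves the one guaranteed-to-fail call per apply that the commit message observed in the l3-agent logs.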

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/pike)

Related fix proposed to branch: stable/pike
Review: https://review.openstack.org/515488

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/ocata)

Reviewed: https://review.openstack.org/514429
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=4ed622eafc5568a526f9bd6177fd7d005b9066d1
Submitter: Zuul
Branch: stable/ocata

commit 4ed622eafc5568a526f9bd6177fd7d005b9066d1
Author: Ihar Hrachyshka <email address hidden>
Date: Tue Oct 10 11:21:03 2017 -0700

    iptables: don't log lock error if we haven't passed -w

    In this case, it's an expected error, and we retry again with -w.

    Related-Bug: #1712185
    Change-Id: I97bf3032b5cebcbce51a3b3de6cb128ca342bd87
    (cherry picked from commit 2f0ffa998a6fb171109196df8c8d3ce9f9f5a215)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/515488
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5a65e974d5cf0b302a59cab41c7d3b537862ac75
Submitter: Zuul
Branch: stable/pike

commit 5a65e974d5cf0b302a59cab41c7d3b537862ac75
Author: Brian Haley <email address hidden>
Date: Wed Oct 18 15:38:29 2017 -0400

    Always call iptables-restore with -w if done once

    In the case where we called iptables-restore with a
    -w argument and it succeeded, we should short-circuit
    future calls to always use -w, instead of trying
    without it, just to fall-back to using it on failure.

    While analyzing some l3-agent log files I have seen
    lots of "Perhaps you want to use the -w option?",
    followed by a call with -w, followed by not using it
    the next time. Changing this can save one failing
    call to iptables-restore.

    Change-Id: Icac99eb1d43648c64b6beaee0d6201f990eacb51
    Related-bug: #1712185
    (cherry picked from commit 6c50ad5858175e2eddeed7a3e28cf8b53c2eb2b3)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.openstack.org/513489
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=46081445d681074d5c598e725e0c568e1402dd14
Submitter: Zuul
Branch: master

commit 46081445d681074d5c598e725e0c568e1402dd14
Author: Brian Haley <email address hidden>
Date: Thu Oct 19 15:54:45 2017 -0400

    Change iptables-restore lock interval to 5 per second

    The default wait-interval for iptables-restore when
    using -w is 1 second between tries. On a busy system
    that could mean we timeout before we get the lock. Try
    5 times per second instead by using -W 200000.

    Change-Id: I8307db20187516be781e37c191d8f09a9a8e3dc3
    Related-bug: #1712185
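The arithmetic behind the -W value above: -W takes a retry interval in microseconds, so 200000 µs between attempts means five lock-acquisition attempts per second instead of the default one. A quick sanity check (the helper name is illustrative):

```python
# -W takes microseconds between lock attempts; convert to attempts/second.
def attempts_per_second(interval_us):
    """How many lock-acquisition attempts per second a -W value yields."""
    return 1_000_000 // interval_us

# attempts_per_second(200000) → 5 (the value chosen in this commit)
# attempts_per_second(1_000_000) → 1 (the iptables default)
```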

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/pike)

Related fix proposed to branch: stable/pike
Review: https://review.openstack.org/519724

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/519724
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=c3896b6bda49e6cad320f93bd273a986d46b5ed9
Submitter: Zuul
Branch: stable/pike

commit c3896b6bda49e6cad320f93bd273a986d46b5ed9
Author: Brian Haley <email address hidden>
Date: Thu Oct 19 15:54:45 2017 -0400

    Change iptables-restore lock interval to 5 per second

    The default wait-interval for iptables-restore when
    using -w is 1 second between tries. On a busy system
    that could mean we timeout before we get the lock. Try
    5 times per second instead by using -W 200000.

    Change-Id: I8307db20187516be781e37c191d8f09a9a8e3dc3
    Related-bug: #1712185
    (cherry picked from commit 46081445d681074d5c598e725e0c568e1402dd14)
