ovs bridge flow table is dropped by unknown cause
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
neutron | Fix Released | High | Rodolfo Alonso |
Bug Description
Hi,
My OpenStack deployment has a provider network whose OVS bridge is "provision". It had been running fine, but after several hours the network broke down and I found that the bridge's flow table is empty.
Is there a way to trace changes to a bridge's flow table?
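One low-tech way to watch for this (a sketch, not an official Neutron/OVS tool) is to snapshot `ovs-ofctl dump-flows provision` periodically and diff consecutive snapshots after stripping the volatile counters. The helper names below (`normalize`, `diff_flows`) are made up for this illustration; only the bridge name "provision" and the `NXST_FLOW` output format come from the report.

```python
# Sketch: detect flow-table changes by diffing two "ovs-ofctl dump-flows"
# snapshots. Volatile per-flow fields (duration, packet/byte counters) are
# stripped so only the rule content is compared.
import re

def normalize(dump: str) -> set[str]:
    """Turn a dump-flows snapshot into a set of stable flow strings."""
    flows = set()
    for line in dump.splitlines():
        line = line.strip()
        if not line or line.startswith(("NXST_FLOW", "OFPST_FLOW")):
            continue  # skip the reply header line
        # drop fields that change on every dump
        line = re.sub(
            r"\b(duration|n_packets|n_bytes|idle_age|hard_age)=[^,\s]*,?\s*",
            "", line)
        flows.add(line)
    return flows

def diff_flows(old: str, new: str) -> tuple[set[str], set[str]]:
    """Return (added, removed) flow entries between two snapshots."""
    before, after = normalize(old), normalize(new)
    return after - before, before - after

# Example: a bridge whose only rule disappeared between two snapshots.
old = """NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=12.3s, table=0, n_packets=5, n_bytes=300, priority=0 actions=NORMAL"""
new = "NXST_FLOW reply (xid=0x4):"
added, removed = diff_flows(old, new)
print(added, removed)
```

For the question as asked, OVS itself can also show who is modifying the table: `ovs-ofctl snoop provision` prints the OpenFlow messages (including flow-mods) exchanged with the bridge, which would reveal whether the agent or something else deleted the flows.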
[root@cloud-
NXST_FLOW reply (xid=0x4):
[root@cloud-
NXST_FLOW reply (xid=0x4):
[root@cloud-
[root@cloud-
[root@cloud-
...
10.53.33.0/24 dev provision proto kernel scope link src 10.53.33.11
10.53.128.0/24 dev docker0 proto kernel scope link src 10.53.128.1
169.254.0.0/16 dev br-ex scope link metric 1055
169.254.0.0/16 dev provision scope link metric 1056
...
[root@cloud-
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000248a07
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(bond0): addr:24:
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
2(phy-provision): addr:76:
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(provision): addr:24:
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_
[root@cloud-
bond0: flags=5187<
inet6 fe80::268a:
ether 24:8a:07:55:41:e8 txqueuelen 1000 (Ethernet)
RX packets 93588032 bytes 39646246456 (36.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8655257217 bytes 27148795388 (25.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@cloud-
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 24:8a:07:55:41:e8
Active Aggregator Info:
Aggregator ID: 19
Number of ports: 2
Actor Key: 13
Partner Key: 11073
Partner Mac Address: 38:bc:01:c2:26:a1
Slave Interface: enp4s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 24:8a:07:55:41:e8
Slave queue ID: 0
Aggregator ID: 19
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 24:8a:07:55:41:e8
port key: 13
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 38:bc:01:c2:26:a1
oper key: 11073
port priority: 32768
port number: 43
port state: 61
Slave Interface: enp5s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 24:8a:07:55:44:64
Slave queue ID: 0
Aggregator ID: 19
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 24:8a:07:55:41:e8
port key: 13
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 38:bc:01:c2:26:a1
oper key: 11073
port priority: 32768
port number: 91
port state: 61
tags: | added: neutron-proactive-backport-potential |
tags: | removed: in-stable-newton in-stable-ocata neutron-proactive-backport-potential |
Changed in neutron: | |
assignee: | Arjun Baindur (abaindur) → Rodolfo Alonso (rodolfo-alonso-hernandez) |
tags: | added: neutron-proactive-backport-potential |
There are two other servers with the same network configuration, but the issue hasn't appeared on them.