dmesg is spammed with tc mirred to Houston: device br-data is down
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Neutron Open vSwitch Charm | New | Undecided | Unassigned |
Bug Description
I have deployed OpenStack Bionic/Ussuri with Mellanox ConnectX-5 ASAP2 HW offloading as described in [1], using the in-tree Mellanox drivers. The dmesg log is spammed with records related to traffic-control mirror/redirect actions around OVS's br-data bridge interface. The interface is indeed down (as it is typically not supposed to send or receive any packets):
$ ip link show br-data
90: br-data: <BROADCAST,
link/ether 0c:42:a1:1f:a8:e6 brd ff:ff:ff:ff:ff:ff
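For context, hardware offload should be enabled in the OVS database on such a deployment; the standard way to confirm it (the expected output is shown here, not captured from this host) is:
$ ovs-vsctl get Open_vSwitch . other_config:hw-offload
"true"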
AFAIK, TC mirred is used by the HW offloading datapath [2], but this needs to be verified.
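One way to check this hypothesis is to dump the TC filters that OVS installs on the VF representor netdevs and look for mirred actions targeting br-data. "eth2" below is a placeholder for an actual representor interface on the host:
$ tc -s filter show dev eth2 ingress
Lines such as "action order 1: mirred (Egress Redirect to device br-data)" in the output would identify which offloaded rules hit the down interface.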
The symptoms:
$ dmesg -T
[Sun Oct 11 17:23:30 2020] net_ratelimit: 16 callbacks suppressed
[Sun Oct 11 17:23:30 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:30 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:30 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:31 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:31 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:31 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:31 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:31 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:32 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:32 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:35 2020] net_ratelimit: 16 callbacks suppressed
[Sun Oct 11 17:23:35 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:35 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:36 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:36 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:36 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:36 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:36 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:37 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:37 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:37 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:40 2020] net_ratelimit: 16 callbacks suppressed
[Sun Oct 11 17:23:40 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:41 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:41 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:41 2020] tc mirred to Houston: device br-data is down
[Sun Oct 11 17:23:41 2020] tc mirred to Houston: device br-data is down
These messages are probably harmless, but there are hundreds of them per minute:
$ dmesg | wc -l
8771
Bringing the interface up mutes the error messages.
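For the static bridge this is a single iproute2 command:
$ ip link set dev br-data up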
[1] https:/
[2] https:/
In the example above the affected interface is a static one, and bringing it up silences the errors. However, there are also interfaces that are created and deleted dynamically when tenants create or delete trunk bridges in OVS via the OpenStack API (VLAN-aware VMs: trunks and subports). The same thing happens for these interfaces, and there is no practical remedy to keep them from flooding the kernel log.
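At best one could imagine bringing such interfaces up automatically as they appear, e.g. via a udev rule. The sketch below is hypothetical and untested, and the tbr-* pattern assumes the interface naming used by Neutron's native OVS trunk driver:
# /etc/udev/rules.d/90-ovs-trunk-up.rules (hypothetical, untested)
# Bring dynamically created OVS trunk bridge interfaces up as soon as they appear.
SUBSYSTEM=="net", ACTION=="add", KERNEL=="tbr-*", RUN+="/sbin/ip link set dev %k up"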