Comment 7 for bug 1445684

Alexander Nevenchannyy (anevenchannyy) wrote:

Folks,

1) We need to increase the txqueuelen on some of the bridges to at least 1000.
For example:
p_br-prv-0 Link encap:Ethernet HWaddr ca:a0:28:19:8d:bd
          inet6 addr: fe80::c8a0:28ff:fe19:8dbd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:152526367 errors:0 dropped:7590012 overruns:0 frame:0
          TX packets:214336421 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:177639911221 (177.6 GB) TX bytes:256069399236 (256.0 GB)

For my configuration that means: p_br-prv-0, p_br-floating-0, eth2.140, br-storage, br-prv, br-int, br-fw-admin, br-floating, br-ex.
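A minimal sketch of raising txqueuelen on those interfaces with iproute2 (the interface list is the one from my configuration above; adjust it for your node, and note this needs root):

```shell
# Raise the transmit queue length to 1000 on each affected interface.
# Interface names are from the configuration listed above.
for dev in p_br-prv-0 p_br-floating-0 eth2.140 br-storage br-prv \
           br-int br-fw-admin br-floating br-ex; do
    ip link set dev "$dev" txqueuelen 1000
done
```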

2) Under high network load a single CPU is pegged at 100% (ksoftirqd, in the net_rx path), so we need to enable receive packet steering (RPS):
for i in $(seq 0 *CPU_NUM*); do echo ff > /sys/class/net/eth2/queues/rx-$i/rps_cpus; done
More info at: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/network-rps.html
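The ff mask above covers CPUs 0-7. On hosts with a different core count, the mask can be derived from the number of online CPUs (a sketch, assuming `nproc` is available and the host has at most 63 CPUs, since it uses a plain bit shift):

```shell
# Build an RPS CPU mask covering all online CPUs:
# for N CPUs the mask is (1 << N) - 1, written in hex (ff = 8 CPUs).
mask=$(printf '%x' $(( (1 << $(nproc)) - 1 )))
echo "$mask"
# Then write it to each receive queue of the NIC, e.g.:
# for q in /sys/class/net/eth2/queues/rx-*; do echo "$mask" > "$q"/rps_cpus; done
```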
3) As discussed with Vova Kuklin, we need to increase net.core.netdev_max_backlog to at least 262144.
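A sketch of applying and persisting that backlog value through the standard sysctl mechanism (needs root):

```shell
# Apply the new backlog limit immediately.
sysctl -w net.core.netdev_max_backlog=262144
# Persist it across reboots.
echo 'net.core.netdev_max_backlog = 262144' >> /etc/sysctl.conf
```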
4) At the moment the 10 Gb/s Ethernet cards have txqueuelen 512; we should raise it to the maximum the hardware supports.
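If the limit in question is the NIC's hardware TX ring rather than the qdisc txqueuelen, ethtool can query the preset maximum and raise the ring to it (a sketch, assuming eth2 is the 10 Gb/s NIC; 4096 is only an example value, use whatever maximum -g reports):

```shell
# Show preset maximums and current ring sizes for the NIC.
ethtool -g eth2
# Raise the TX ring to the hardware maximum reported above (e.g. 4096).
ethtool -G eth2 tx 4096
```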