e1000 with Intel Pro1000GT shows strange ICMP behaviour

Bug #1072472 reported by thomasM
This bug affects 1 person
Affects: linux (Ubuntu) | Status: Expired | Importance: Low | Assigned to: Unassigned | Milestone: (none)

Bug Description

Hi, I have been sitting here for 5 days configuring 4 brand-new Lenovo TS130 servers with 12.10 Server, each having two additional Intel Pro/1000 GT cards. Syslog shows that the e1000 driver is used. The interface comes up normally but shows strange ICMP echo_request (ping) behaviour.
onboard em1
Oct 28 20:20:31 cnode4 kernel: [ 0.916024] e1000e: Intel(R) PRO/1000 Network Driver - 2.0.0-k
Oct 28 20:20:31 cnode4 kernel: [ 0.916031] e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
Oct 28 20:20:31 cnode4 kernel: [ 0.916078] e1000e 0000:00:19.0: >setting latency timer to 64
Oct 28 20:20:31 cnode4 kernel: [ 0.916144] e1000e 0000:00:19.0: >Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode

one of the ethX Pro/1000 GT interfaces
Oct 28 20:20:31 cnode4 kernel: [ 0.916187] e1000e 0000:00:19.0: >irq 43 for MSI/MSI-X
Oct 28 20:20:31 cnode4 kernel: [ 0.917102] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
Oct 28 20:20:31 cnode4 kernel: [ 0.917106] e1000: Copyright (c) 1999-2006 Intel Corporation.

When I do "ping -v -i 0.2 172.16.0.3", the ICMP round-trip time starts at about 100 ms, decreases continuously by about 1 ms per packet until it reaches 1 ms, and then jumps back up to the maximum... scary and strange.
PING 172.16.0.3 (172.16.0.3) 56(84) bytes of data.
64 bytes from 172.16.0.3: icmp_req=1 ttl=64 time=11.2 ms
64 bytes from 172.16.0.3: icmp_req=2 ttl=64 time=10.7 ms
64 bytes from 172.16.0.3: icmp_req=3 ttl=64 time=9.73 ms
64 bytes from 172.16.0.3: icmp_req=4 ttl=64 time=8.71 ms
64 bytes from 172.16.0.3: icmp_req=5 ttl=64 time=7.73 ms
64 bytes from 172.16.0.3: icmp_req=6 ttl=64 time=6.73 ms
64 bytes from 172.16.0.3: icmp_req=7 ttl=64 time=105 ms
64 bytes from 172.16.0.3: icmp_req=8 ttl=64 time=104 ms
64 bytes from 172.16.0.3: icmp_req=9 ttl=64 time=103 ms
64 bytes from 172.16.0.3: icmp_req=10 ttl=64 time=102 ms
64 bytes from 172.16.0.3: icmp_req=11 ttl=64 time=101 ms
64 bytes from 172.16.0.3: icmp_req=12 ttl=64 time=101 ms
64 bytes from 172.16.0.3: icmp_req=13 ttl=64 time=100 ms

Am I seeing a counter loop here?

The network performance is bad too, no wonder.
I changed the NIC, the cable, the switch port and the switch... still the same. In addition, a direct crosslink does not work at all. I tried both straight-through and crossover cables. In all tests, mii-diag and mii-tool show that everything is fine.
The onboard NIC uses e1000e and works as expected, pinging far below 1 ms round-trip.
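One diagnostic worth trying for symptoms like this is disabling the driver's interrupt throttling, since the boot log above mentions "Interrupt Throttling Rate ... dynamic conservative mode". This is only a sketch: the InterruptThrottleRate parameter comes from the kernel's e1000/e1000e driver documentation and may not exist in every kernel version, and the interface name eth1 is an assumption.

```shell
# Confirm which driver binds the affected interface (eth1 assumed here)
ethtool -i eth1

# Check whether this e1000 build exposes the throttling parameter at all
modinfo -p e1000 | grep -i throttle

# As an experiment, turn interrupt throttling off (0 = disabled; one
# comma-separated value per port). This is a diagnostic, not a fix.
echo "options e1000 InterruptThrottleRate=0,0" | sudo tee /etc/modprobe.d/e1000.conf
sudo modprobe -r e1000 && sudo modprobe e1000
```

If the sawtooth latency pattern disappears with throttling off, that points at interrupt moderation or delivery rather than cabling or the switch.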

Is this a bug, or is it some "new" feature I have never heard about, and how can I get rid of it?

Thanks for any answers

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

Thank you for taking the time to report this bug and helping to make Ubuntu better. It seems that your bug report is not filed about a specific source package though, rather it is just filed against Ubuntu in general. It is important that bug reports be filed about source packages so that people interested in the package can find the bugs about it. You can find some hints about determining what package your bug might be about at https://wiki.ubuntu.com/Bugs/FindRightPackage. You might also ask for help in the #ubuntu-bugs irc channel on Freenode.

To change the source package that this bug is filed about visit https://bugs.launchpad.net/ubuntu/+bug/1072472/+editstatus and add the package name in the text box next to the word Package.

[This is an automated message. I apologize if it reached you inappropriately; please just reply to this message indicating so.]

tags: added: bot-comment
Revision history for this message
thomasM (thomas-moerschel) wrote :

OK Problem found. Report can be closed.

The problem is the Sandy Bridge chipset (C206) and its IRQ handling. This problem has existed since the first chipset generation and nobody seems to care about it. There is definitely no workaround or solution to date (even irqpoll does not work). All you can do is use PCIe instead of PCI. But I currently have problems using the PCIe x1 slot too.

I hope that everyone who reads this does not need to search as long as I did.
If you want to read more about the causes, have a look in the Fedora Core forum and search
for Sandy Bridge / irq nobody cared / irq disabled.
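The IRQ problem described above normally leaves a trace in the kernel log when the kernel gives up on an interrupt line. A quick way to check for it, and to test the irqpoll workaround mentioned above, is sketched below (the /etc/default/grub path and GRUB_CMDLINE_LINUX_DEFAULT variable are the standard GRUB 2 locations on Ubuntu 12.10, but verify on your system):

```shell
# Look for the kernel disabling a misbehaving interrupt line
dmesg | grep -i "nobody cared"

# See which IRQ lines the e1000 ports landed on, and whether their
# counters stop advancing when the latency jumps
grep e1000 /proc/interrupts

# To test the irqpoll workaround, add it to the kernel command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash irqpoll"
# in /etc/default/grub, then regenerate the GRUB config and reboot:
sudo update-grub && sudo reboot
```

If "irq NN: nobody cared (try booting with the \"irqpoll\" option)" appears in dmesg, that matches the shared-PCI-interrupt failure mode described in this comment.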

Changed in ubuntu:
status: New → Confirmed
Revision history for this message
penalvch (penalvch) wrote :

thomasM, thank you for reporting this bug to Ubuntu. Quantal reached EOL on May 16, 2014.
See this document for currently supported Ubuntu releases: https://wiki.ubuntu.com/Releases

Is this an issue in a supported release? If so, could you please execute the following command, as it will automatically gather debugging information, in a terminal:
apport-collect 1072472

affects: ubuntu → linux (Ubuntu)
Changed in linux (Ubuntu):
importance: Undecided → Low
status: Confirmed → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for linux (Ubuntu) because there has been no activity for 60 days.]

Changed in linux (Ubuntu):
status: Incomplete → Expired