[karmic] CPU load not being reported accurately

Bug #513848 reported by Peter Matulis
This bug affects 1 person
Affects            Status         Importance   Assigned to      Milestone
linux (Ubuntu)     Fix Released   Low          Chase Douglas
  Declined for Jaunty by Stefan Bader
  Karmic           Fix Released   Low          Chase Douglas

Bug Description

Overview:

When running Java-based tools (Tomcat) on a 64-bit server, Karmic reports an almost non-existent CPU load. The screenshot ('load for 9.10') shows this. However, the same load applied to Jaunty produces a different report ('load for 9.04'). Note that stress tools applied to Karmic do result in an increase in reported load (as expected).

Reproducible: 100%

Workaround:

Use Jaunty.

Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Peter Matulis (petermatulis) wrote :
description: updated
Revision history for this message
Peter Matulis (petermatulis) wrote :

Linux ewp1 2.6.31-14-server #48-Ubuntu SMP Fri Oct 16 15:07:34 UTC 2009 x86_64 GNU/Linux

Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Surbhi Palande (csurbhi) wrote :

@Peter Matulis, would it be possible to post a screenshot with the output of dstat -cmdlni 5 for Karmic as well? Also, can you post a screenshot with the output of top for Jaunty?

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@Peter Matulis,

First, please edit the description for consistency. You say that using Tomcat on Karmic does not produce the expected load, but then you say that "stress tools applied to Karmic do result in an increase in load (as expected)." I am guessing you mean that Jaunty reports the load avg correctly but Karmic does not.

Load avg is a time-weighted average of the length of the CPU run queue. For example, a load avg of 5 means that approximately 5 processes are ready to run at any given moment. On a dual-core machine that is an overload, because only 2 of them can run at once; on an eight-core machine all of them may run, so it is not overloaded.

One can easily test load avg results by running a simple test case. In bash, run the following:

  while true; do (( i++ )); done

This should run as one process. It is simple and does not need to wait for disk or anything else, so it's pretty much always runnable. If you run it for a minute you should see the 1 minute load average increase by about 1 (very rough, but if you wait 15 mins you should see the 15 min average stabilize better than the 1 minute average). If you were to run two shells like this you should see the load average increase by 2. I have tested this in karmic and found it to be the case.

Can you provide further details on why there is an issue with karmic? Also, can you verify that you are running the exact same tests on the exact same hardware, and provide equivalent screenshots for karmic and jaunty? It's difficult to understand what the real differences are when the screenshots are not easily comparable.

Thanks

Changed in linux (Ubuntu):
status: New → Incomplete
Revision history for this message
Peter Matulis (petermatulis) wrote :

@Chase Douglas

The description is fine. On Karmic, the simulated load from a generic stress tool ('stress') is reported, but the specific (real-life) load I refer to is not. So the problem is a subtle one.

Thank you for the tutorial on CPU load. However, a *zero* load that corresponds to hundreds of running threads is enough to present a valid case, no matter how many cores I have.

@Surbhi

I will see what I can do to get identical comparisons.

Changed in linux (Ubuntu):
importance: Undecided → Low
Revision history for this message
Chase Douglas (chasedouglas) wrote :

The load avg is only related to the number of runnable/uninterruptible processes. There can be hundreds of threads and 0.00 load avg if they are all waiting or sleeping.

However, your screenshot shows consistent cpu utilization of 15-20%, so one would expect the load avg to be at least 0.2, at least for the 1 minute figure. The load avg is calculated once every 5 seconds inside the kernel. At that point in time the average is recomputed using the current number of runnable/uninterruptible processes. For the load avg to be 0.00, there could not have been any runnable or uninterruptible processes at any of the 5 second capture points during the last 5 minutes.
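
For cross-checking while reproducing, here is a small illustrative snippet (just a sketch of mine, not an official tool) that prints the instantaneous runnable-process count from /proc/stat next to the averages from /proc/loadavg, at the same 5 second cadence the kernel uses:

  /* loadcheck.c - print /proc/loadavg next to procs_running every 5 seconds. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[256];

      for (int i = 0; i < 180; i++) {            /* roughly 15 minutes */
          FILE *f = fopen("/proc/loadavg", "r");
          if (f) {
              if (fgets(buf, sizeof(buf), f))
                  printf("avg: %s", buf);        /* e.g. "0.03 0.05 0.01 1/123 4567" */
              fclose(f);
          }
          f = fopen("/proc/stat", "r");
          if (f) {
              while (fgets(buf, sizeof(buf), f))
                  if (strncmp(buf, "procs_running", 13) == 0)
                      printf("now: %s", buf);    /* runnable tasks right now */
              fclose(f);
          }
          sleep(5);
      }
      return 0;
  }

If "now" regularly shows runnable tasks while the 1 minute average stays at 0.00, that is exactly the disparity we are trying to capture.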

I think it would be most helpful to get identical comparisons between the two distros, preferably including the dstat output on karmic.

Revision history for this message
Peter Matulis (petermatulis) wrote :

I've uploaded comparable screenshots for both Jaunty and Karmic.

Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Chase Douglas (chasedouglas) wrote :

In the jaunty image I can see both the load avg and about 15 mins of cpu usage. They seem to correlate well with each other. In the karmic image I can only see the load avg and one to two mins of cpu usage. Based on what I see, the reported load avg numbers for 5 mins and 1 min seem reasonable. The last minute seems to average below 5% processor usage, so a load avg of 0.03 makes sense. I don't have enough data here to see whether the 15 min load avg is reasonable.

We need a longer graph of the cpu usage for karmic to determine whether the load avg numbers are reasonable. It would also help if the processor usage were kept higher, since we are trying to demonstrate a disparity between low load avg numbers and high cpu usage.

Revision history for this message
Peter Matulis (petermatulis) wrote :

Here are new screenshots for Jaunty and Karmic. Each test ran for a good 25 minutes. I've also added a Karmic screenshot for 'top' showing (some of) the threads associated with the single Java process. In addition, I have copied my recent post (2010-03-10) to <email address hidden> :

--------
Can anyone tell me whether the Karmic kernel has implemented a different
way of determining what it considers a process (as opposed to threads)?

I have a situation where the CPU load is zero on Karmic but considerably
higher on Jaunty and earlier.

The scenario is a single Java process with many threads. The system has
4 cores, and one could argue that a single Java process should logically
produce a negligible CPU load, but why was this not the case with earlier
kernels? Has something changed in Karmic that would explain what I'm seeing?
--------

Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Peter Matulis (petermatulis) wrote :
Revision history for this message
Peter Matulis (petermatulis) wrote :

I wish I could have gotten a higher CPU usage out of the Karmic test. Nonetheless, I believe I have a very valid case for discussion.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

Are you able to test and reproduce this on a range of hardware, or just on a single model or machine?

Revision history for this message
tgabi (tgabi) wrote :

Greetings,

I'm the one who originally reported this bug.

@Chase Douglas
The bug is present on a variety of hardware: single CPU, dual CPU, AMD, Intel.

One more piece of information that has not been added to the description: when one of the Java threads goes into a closed loop (due to a bug in image processing) and uses 100% of one CPU core, we do register +1 on the system load. Disk activity also registers on the system load.

Changed in linux (Ubuntu):
status: Incomplete → New
Revision history for this message
Chase Douglas (chasedouglas) wrote :

I have uploaded a test kernel to http://people.canonical.com/~cndougla/513848. Install the kernel and run a normal load. It will print the number of running and uninterruptible processes to the kernel log each time the load average is calculated (every 5 seconds). Please take a screenshot with all the information as before and attach the dmesg from this kernel.

Thanks

Changed in linux (Ubuntu):
assignee: nobody → Chase Douglas (chasedouglas)
status: New → Incomplete
Tim Gardner (timg-tpi)
Changed in linux (Ubuntu):
status: Incomplete → In Progress
Revision history for this message
tgabi (tgabi) wrote :

Is that a desktop kernel? I'm going to test anyway, but there may be differences.

Revision history for this message
tgabi (tgabi) wrote :
Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

Sorry about the kernel type; I should have built a -server kernel for you. However, there shouldn't be any process statistics differences between the -server and -generic kernels.

Thanks for the logging output. Unfortunately the dstat output doesn't have any timestamps associated with it. Please add -t to the dstat command line to output timestamps as well. The dstat command outputs data every second by default, so we only have 24 seconds of output from dstat and only 115 seconds of output in the dmesg. Based on the data in the log, everything seems correct to me, but we don't have much to go on.

To better solve this issue we need the dstat output with timestamps and the dmesg output over a full 15 minute period. I will provide a new kernel build based on the -server kernel. If the debug output is too large, please use bzip2 to compress it before attaching.

Thanks

Revision history for this message
tgabi (tgabi) wrote :

I'll redo the test shortly. However, it is puzzling to look at something like this:

Mar 18 11:20:50 web41 kernel: [ 2059.210041] calc_global_load:3014 cpu: 0, nr_running: 2, nr_uninterruptible: 84
Mar 18 11:20:50 web41 kernel: [ 2059.210044] calc_global_load:3014 cpu: 1, nr_running: 0, nr_uninterruptible: 136
Mar 18 11:20:50 web41 kernel: [ 2059.210047] calc_global_load:3014 cpu: 2, nr_running: 1, nr_uninterruptible: 60
Mar 18 11:20:50 web41 kernel: [ 2059.210049] calc_global_load:3014 cpu: 3, nr_running: 3, nr_uninterruptible: -280
Mar 18 11:20:50 web41 kernel: [ 2059.210051] calc_global_load:3017 calc_load_tasks: 0

With 6 running tasks, the calculated load is zero.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

I've uploaded a -server kernel to http://people.canonical.com/~cndougla/513848 for testing.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

The two printouts are somewhat imprecise. The calc_load_tasks value is the exact value used for calculating the load avg you see in user space. I printed out the nr_running and nr_uninterruptible counts just for information.

The difference here is that the nr_* counts are the counts at the time of printing. The calc_load_tasks count is only updated once every five seconds by each cpu, likely within the 10 ticks before I print it out. Thus, processes could have been woken up in the interval between when calc_load_tasks was updated and when the kernel printed these messages.

Now it is odd that none of the calc_load_tasks values are larger than the sum of the nr_* values, but with the limited data set in the log it could be coincidental. If we could load the system more for longer periods of time we would likely see variance both ways, assuming everything is working right.

Revision history for this message
tgabi (tgabi) wrote :

Well, unless calc_load_tasks is an integer it still doesn't make sense.

Revision history for this message
tgabi (tgabi) wrote :
Revision history for this message
tgabi (tgabi) wrote :
Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

calc_load_tasks is an integer [1]. It basically represents the latest data point used for load avg calculation and is only updated once by each cpu every 5 seconds.

[1] http://lxr.linux.no/linux+v2.6.33/kernel/sched.c#L2997
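
Since the integer question keeps coming up, here is a small user-space re-creation of the fixed-point arithmetic (constants and formula as I remember them from include/linux/sched.h -- treat this as a sketch and check the lxr link above for the authoritative version). calc_load_tasks itself is a plain integer task count; the averages only look fractional because the count is scaled by FIXED_1 before being folded into the decaying average:

  /* fixedpoint_sketch.c - user-space model of the kernel's fixed-point
   * load average update for the 1 minute figure. */
  #include <stdio.h>

  #define FSHIFT   11                   /* bits of fractional precision */
  #define FIXED_1  (1 << FSHIFT)        /* 1.0 in fixed point (2048) */
  #define EXP_1    1884                 /* decay factor for the 1 min average */

  static unsigned long calc_load(unsigned long load, unsigned long exp,
                                 unsigned long active)
  {
      load *= exp;
      load += active * (FIXED_1 - exp);
      return load >> FSHIFT;
  }

  int main(void)
  {
      unsigned long avenrun_1min = 0;   /* fixed-point 1 minute average */
      long calc_load_tasks = 1;         /* integer count of runnable tasks */

      for (int sample = 1; sample <= 24; sample++) {   /* two minutes of 5s samples */
          unsigned long active = calc_load_tasks > 0 ? calc_load_tasks * FIXED_1 : 0;
          avenrun_1min = calc_load(avenrun_1min, EXP_1, active);
          printf("sample %2d: 1 min load = %lu.%02lu\n", sample,
                 avenrun_1min >> FSHIFT,
                 ((avenrun_1min & (FIXED_1 - 1)) * 100) >> FSHIFT);
      }
      return 0;
  }

So a calc_load_tasks of 0 at the sample point really does drag the averages toward 0.00, regardless of what happened in between.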

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

In the log you just posted there seem to be far too many running tasks that are never accounted for in calc_load_tasks. I will try to figure out a reasonable way to debug what is going on.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

The good news is that the dstat output matches up perfectly with the calc_load_tasks values in dmesg, so we know the issue is confined to calc_load_tasks not being representative of the number of running processes in the system.

Revision history for this message
tgabi (tgabi) wrote :

Tomcat is using a large number of java threads.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

I found that the calc_load_tasks counter is updated in two places: once every 5 seconds before the load avg is calculated, and every time a cpu enters the idle task. The latter occurs very frequently if the system isn't loaded very much, which seems to be the case for your server. Every time a cpu enters the idle task it almost always decrements the count by at least one, because it goes from executing a running task to having no runnable task available for scheduling (unless the task goes into the uninterruptible state, which doesn't happen very often). Thus, the counter is almost always incremented only when the load avg is calculated, once every 5 seconds.

So every 5 seconds each cpu updates the global calc_load_tasks counter. However, the load avg is not calculated until 10 ticks later. I'm guessing the reason for this delay is that each cpu has its own APIC timer, which is used for scheduling ticks. If the calculation were done at the same time the calc_load_tasks counter was updated on one of the cpus, the other cpus might not have updated the global counter yet. The 10 tick delay ensures that all the processors have had a chance to update the global counter before the new load avg is calculated.

Here's where I think things are going "wrong" for you: the 5 second interval expires and the cpus update the global counter, likely incrementing it to the number of tasks currently runnable. Between then and the time the load avg is calculated 10 ticks later, each running task sleeps and every processor passes through the idle task at least once. The calc_load_tasks counter is decremented each time until it hits 0. Now the 10 ticks expire and we calculate the load avg, but the value of calc_load_tasks is 0.

This is most likely to occur in a situation where you have only a few tasks that run, but they sleep often. I'm going to build a new kernel with some extra print outs that will tell us what the calc_load_tasks value is between the 5 second mark and 10 ticks later.
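
To make the timing concrete, here is a toy user-space model of that sequence (deliberately simplified and based on my description above, not actual kernel code):

  /* race_sketch.c - toy model of the calc_load_tasks race described above.
   * Four cpus each fold in one runnable task at the 5 second mark, then every
   * pass through the idle task before the +10 tick sample point decrements
   * the counter again, so the sampled value is already back at 0. */
  #include <stdio.h>

  int main(void)
  {
      int calc_load_tasks = 0;
      const int cpus = 4;

      /* 5 second mark: each cpu adds its currently runnable tasks */
      for (int cpu = 0; cpu < cpus; cpu++) {
          calc_load_tasks += 1;
          printf("cpu %d folds +1  -> calc_load_tasks = %d\n", cpu, calc_load_tasks);
      }

      /* within the next 10 ticks every task briefly sleeps, so every cpu
       * passes through the idle task and decrements the counter */
      for (int cpu = 0; cpu < cpus; cpu++) {
          calc_load_tasks -= 1;
          printf("cpu %d goes idle -> calc_load_tasks = %d\n", cpu, calc_load_tasks);
      }

      /* 10 ticks later: the load avg is computed from whatever is left */
      printf("load avg sample sees %d tasks\n", calc_load_tasks);
      return 0;
  }

Even though every cpu had a runnable task at the 5 second mark, the value actually sampled is 0, which is exactly the pattern in the logs above.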

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

I've uploaded a new test kernel to http://people.canonical.com/~cndougla/513848/linux-image-2.6.31-21-server_2.6.31-21.58~printk2_amd64.deb. This kernel will print a message each time the calc_load_tasks counter changes between the point where each cpu increments it to reflect its running processes and the point where the load avg is calculated. I'm guessing that on your server we will see the counter incremented a few times and then decremented back down to 0 by the time the load avg is calculated.

Please test this kernel by capturing the dmesg for a few minutes. We don't need the dstat output for this anymore, since we know the load avg is being calculated correctly from the calc_load_tasks value. Attach the dmesg here and we'll take a look.

Thanks

Revision history for this message
tgabi (tgabi) wrote :
Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

This latest log confirms my theory. There are only two situations where the calc_load_tasks counter is incremented: when each processor updates the counter every 5 seconds, and when a processor switches from an uninterruptible task to the idle task. The first kind of increment is always matched by a decrement in the log, while the second is only sometimes matched by one. It is exactly when there isn't a matching decrement, due to uninterruptible tasks, that we see the load avg calculated with a non-zero calc_load_tasks.

This may be seen in Karmic but not Jaunty due to this commit: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=dce48a84adf1806676319f6f480e30a6daa012f9. The commit reworks the load avg calculation code, and seems to introduce this regression.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

I've uploaded a new kernel to http://people.canonical.com/~cndougla/513848/linux-image-2.6.31-21-server_2.6.31-21.58~defer1_amd64.deb. This kernel defers any task accounting that occurs during the 10 tick load avg update window until the next accounting done by any cpu after the 10 tick window. Please test it to confirm that it updates the load avg correctly. If it doesn't please upload a new dstat and dmesg over a few minutes just as before.

Thanks

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

I've done some testing of my own, and I don't think the ~defer1 kernel is working properly. Let me work some more on the fix.

Thanks

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

Sorry for the false alarm. I had an incorrect patch lying around, but the correct patch should be in the kernel listed above. I also tested the patch against a test program, which I have attached. Without the patch, the test program ensures at least 90% processor usage, yet I have confirmed that, with no other processes running, the load avg can still be 0.00. With my patch, the load avg is accurate at 0.90 or above while the test program is running.
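
For anyone who wants to approximate that behaviour without the attachment, a sketch along these lines (my own rough approximation, not the attached test case itself) keeps a cpu about 90% busy while sleeping briefly and often, so the cpu keeps passing through the idle task between the 5 second fold and the delayed load avg sample:

  /* busy_sleep.c - spin for ~9 ms, sleep for ~1 ms, repeatedly.  Roughly 90%
   * cpu usage, but the frequent short sleeps let the cpu enter the idle task
   * constantly, which is the pattern that defeats the load accounting here. */
  #include <time.h>

  static void spin_for_ms(long ms)
  {
      struct timespec start, now;

      clock_gettime(CLOCK_MONOTONIC, &start);
      do {
          clock_gettime(CLOCK_MONOTONIC, &now);
      } while ((now.tv_sec - start.tv_sec) * 1000L +
               (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
  }

  int main(void)
  {
      struct timespec nap = { 0, 1 * 1000 * 1000 };   /* 1 ms */

      for (;;) {
          spin_for_ms(9);                /* ~90% of each 10 ms period busy */
          nanosleep(&nap, NULL);         /* ~10% asleep, cpu goes idle */
      }
  }

On an affected kernel, top should show this process near 90% while uptime keeps reporting close to 0.00; with the fix applied the 1 minute figure should settle around 0.90 or above.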

Revision history for this message
tgabi (tgabi) wrote :

Looks much better, although it is a little bit weird in the way it rises and falls.

Revision history for this message
Chase Douglas (chasedouglas) wrote :

@tgabi:

Can you be more specific? If things are working correctly, then great. Otherwise, I'll need more information.

Revision history for this message
Peter Matulis (petermatulis) wrote :

Spoke to tgabi. The kernel is reporting fine.

Andy Whitcroft (apw)
Changed in linux (Ubuntu):
status: In Progress → Fix Committed
Changed in linux (Ubuntu Karmic):
status: New → In Progress
importance: Undecided → Low
assignee: nobody → Chase Douglas (chasedouglas)
Revision history for this message
Chase Douglas (chasedouglas) wrote :

Stable Release Update Justification:

Impact of bug: On lightly loaded systems with specific workload characteristics the load average may not be representative of the real load. With a specific test case it is possible to load a system with NR_CPUS tasks and still see a load avg of 0.00, though this is unlikely to occur with real workloads.

How addressed: The attached patch reworks the load accounting mechanism in the kernel scheduler. It ensures that the accounting depends strictly on the time (i.e. a snapshot taken every 5 seconds) and on the number of runnable and uninterruptible tasks at that time. Previously, the accounting also depended on whether a cpu went idle shortly after the 5 second snapshot.

Reproduction: See attached reproduction test case. Run it once on a non-loaded system (boot to rescue mode works well). Top will report the cpu usage at 90%, but uptime will report a load avg near 0.00 instead of at least 0.90 as expected.

Regression potential: The patch has been well received by senior Ubuntu kernel team members and by some of the upstream kernel maintainers on lkml, so it is assumed to be a good fix for this issue. The only code path touched by this patch is the load avg accounting, so potential regressions could include an incorrect load avg and/or some unforeseen general bug such as a null dereference. However, the likelihood of either is minimal given proper and thorough patch review.

Revision history for this message
Chase Douglas (chasedouglas) wrote :
Revision history for this message
Chase Douglas (chasedouglas) wrote :
Changed in linux (Ubuntu Karmic):
milestone: none → karmic-updates
Revision history for this message
Chase Douglas (chasedouglas) wrote :

I accidentally nominated this for Jaunty. Please ignore.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 2.6.32-20.29

---------------
linux (2.6.32-20.29) lucid; urgency=low

  [ Andy Whitcroft ]

  * Revert "SAUCE: Use MODULE_IMPORT macro to tie intel_agp to i915"
    - LP: #542251
  * add Breaks: against hardy lvm2
    - LP: #528155

  [ Colin Watson ]

  * d-i -- enable udebs for generic-pae
    - LP: #160366

  [ Stefan Bader ]

  * [Config] Add xen netboot support
    - LP: #160366

  [ Takashi Iwai ]

  * (pre-stable): input: Support Clickpad devices in ClickZone mode
    - LP: #516329

  [ Upstream Kernel Changes ]

  * Revert "(pre-stable) Bluetooth: Fix sleeping function in RFCOMM within
    invalid context"
    - LP: #553837
  * Revert "(pre-stable) USB: fix usbfs regression"
    - LP: #553837
  * Revert "(pre-stable) softlockup: Stop spurious softlockup messages due
    to overflow"
    - LP: #553837
  * Revert "(pre-stable) drm/nouveau: report unknown connector state if lid
    closed"
    - LP: #553837
  * drivers/scsi/ses.c: eliminate double free
    - LP: #553837
  * decompress: fix new decompressor for PIC
    - LP: #553837
  * ARM: Fix decompressor's kernel size estimation for ROM=y
    - LP: #553837
  * MIPS: Cleanup forgotten label_module_alloc in tlbex.c
    - LP: #553837
  * tg3: Fix tg3_poll_controller() passing wrong pointer to tg3_interrupt()
    - LP: #553837
  * tg3: Fix 5906 transmit hangs
    - LP: #553837
  * ALSA: hda - Fix input source elements of secondary ADCs on Realtek
    - LP: #553837
  * ALSA: hda: enable MSI for Gateway M-6866
    - LP: #538918, #553837
  * timekeeping: Prevent oops when GENERIC_TIME=n
    - LP: #553837
  * Input: alps - add support for the touchpad on Toshiba Tecra A11-11L
    - LP: #553837
  * Input: i8042 - add ALDI/MEDION netbook E1222 to qurik reset table
    - LP: #553837
  * i2c-i801: Don't use the block buffer for I2C block writes
    - LP: #553837
  * ath5k: dont use external sleep clock in AP mode
    - LP: #553837
  * ath5k: fix setup for CAB queue
    - LP: #553837
  * ring-buffer: Move disabled check into preempt disable section
    - LP: #553837
  * function-graph: Init curr_ret_stack with ret_stack
    - LP: #553837
  * Bluetooth: Fix sleeping function in RFCOMM within invalid context
    - LP: #553837
  * tracing: Use same local variable when resetting the ring buffer
    - LP: #553837
  * tracing: Disable buffer switching when starting or stopping trace
    - LP: #553837
  * tracing: Do not record user stack trace from NMI context
    - LP: #553837
  * PCI: unconditionally clear AER uncorr status register during cleanup
    - LP: #553837
  * efifb: fix framebuffer handoff
    - LP: #553837
  * coredump: suppress uid comparison test if core output files are pipes
    - LP: #553837
  * V4L/DVB (13961): em28xx-dvb: fix memleak in dvb_fini()
    - LP: #553837
  * hrtimer: Tune hrtimer_interrupt hang logic
    - LP: #553837
  * x86, apic: Don't use logical-flat mode when CPU hotplug may exceed 8
    CPUs
    - LP: #553837
  * mvsas: add support for Adaptec ASC-1045/1405 SAS/SATA HBA
    - LP: #553837
  * pci: add support for 82576NS serdes to existing SR-IOV quirk
    - LP: #553837
  * sched: Mark boot-cpu active before smp_init()
    -...

Changed in linux (Ubuntu):
status: Fix Committed → Fix Released
Changed in linux (Ubuntu Karmic):
status: In Progress → Fix Committed
Revision history for this message
Jonathan Riddell (jr) wrote :

Currently in karmic-proposed, awaiting approval from ubuntu-sru

Revision history for this message
Jonathan Riddell (jr) wrote :

Currently in karmic-proposed unapproved queue, awaiting approval from ubuntu-sru

Revision history for this message
Martin Pitt (pitti) wrote : Please test proposed package

Accepted linux into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

tags: added: verification-needed
Revision history for this message
Martin Pitt (pitti) wrote :

Accepted linux-ec2 into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

Revision history for this message
Martin Pitt (pitti) wrote :

Accepted linux-mvl-dove into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

Revision history for this message
Martin Pitt (pitti) wrote :

Accepted linux into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

Revision history for this message
Peter Matulis (petermatulis) wrote :

Installed. Looks normal. I didn't test any specific load however.

$ apt-cache policy linux-image-virtual

linux-image-virtual:
  Installed: 2.6.31.22.35
  Candidate: 2.6.31.22.35
  Version table:
 *** 2.6.31.22.35 0
        500 http://archive.ubuntu.com karmic-updates/main Packages
        500 http://security.ubuntu.com karmic-security/main Packages

I noticed that this change is actually from the 2.6.31-22.66 sources.

Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

@Peter, thanks for testing, that's much appreciated. Did you test the package in -proposed?
What's the output of apt-cache policy linux-image-2.6.31-22-generic?

Could you try to reproduce the load you initially reported and tell us whether the kernel in -proposed fixes it?

Thanks in advance for your help.

Revision history for this message
Peter Matulis (petermatulis) wrote :

$ apt-cache policy linux-image-generic

linux-image-generic:
  Installed: (none)
  Candidate: 2.6.31.22.35
  Version table:
     2.6.31.22.35 0
        500 http://archive.ubuntu.com karmic-updates/main Packages
        500 http://security.ubuntu.com karmic-security/main Packages

A customer was producing the load for me. I don't have the ability to reproduce it accurately myself.

Yes, I have -proposed enabled. But my understanding is that this -proposed fix is actually found in the -security upload. Only the sources for the -proposed package are available.

Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

SRU verification for Karmic:
I have reproduced the problem with linux 2.6.31-22.65 in karmic-updates and have verified that the version of linux 2.6.31-22.66 in -proposed fixes the issue. I've been using Chase's testcase and the load average is reported correctly.

Marking as verification-done

@Peter, the version in -proposed is usually higher than the version in -security. That's why I asked you for the exact version of the package you were testing.

Thanks all for reporting and testing.

tags: added: verification-done
removed: verification-needed
Revision history for this message
Martin Pitt (pitti) wrote :

Accepted linux into karmic-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

tags: removed: verification-done
tags: added: verification-needed
Steve Conklin (sconklin)
tags: added: verification-done
removed: verification-needed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 2.6.31-22.68

---------------
linux (2.6.31-22.68) karmic-proposed; urgency=low

  [ Andy Whitcroft ]

  * SAUCE: docs -- fix doc strings for fc_event_seq

  [ Brad Figg ]

  * SAUCE: (no-up) Modularize vesafb -- fix initialization
    - LP: #611471

  [ Chase Douglas ]

  * SAUCE: sched: update load count only once per cpu in 10 tick update
    window
    - LP: #513848

  [ Ike Panhc ]

  * SAUCE: agp/intel: Add second set of PCI-IDs for B43
    - LP: #640214
  * SAUCE: drm/i915: Add second set of PCI-IDs for B43
    - LP: #640214

  [ Steve Conklin ]

  * SAUCE: Fix compile error on ia64, powerpc, and sparc

  [ Upstream Kernel Changes ]

  * (pre-stable) x86-32, resume: do a global tlb flush in S4 resume
    - LP: #531309
  * PCI: Ensure we re-enable devices on resume
    - LP: #566149
 -- Steve Conklin <email address hidden> Fri, 22 Oct 2010 09:05:13 -0500

Changed in linux (Ubuntu Karmic):
status: Fix Committed → Fix Released
Revision history for this message
bart bobrowski (mrbart) wrote :

I am seeing the same problem on 11.10.

Please see my description here: http://ubuntuforums.org/showthread.php?p=11612057
