4.8 regression: SLAB is being used instead of SLUB

Bug #1626564 reported by Colin Ian King
This bug affects 4 people
Affects                 Status        Importance  Assigned to   Milestone
linux (Ubuntu)          Fix Released  High        Tim Gardner
linux (Ubuntu Yakkety)  Fix Released  High        Tim Gardner

Bug Description

We're seeing hundreds of kernel worker threads being spawned by some actions; for example, after booting the desktop, hitting the brightness keys causes this. On investigation, this occurs when CONFIG_SLAB is being used.

1. Ubuntu traditionally uses CONFIG_SLUB, so we should use that instead of CONFIG_SLAB (why was it changed for Yakkety?)
2. With CONFIG_SLUB I cannot reproduce the issue of the hundreds of worker threads
3. CONFIG_SLUB also appears to give a faster boot than SLAB.

Please re-enable the CONFIG_SLUB allocator as per the 4.4 Xenial configs.
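
(For anyone wanting to check which allocator a given kernel was built with, a quick sketch, assuming a stock Ubuntu install with the config shipped in /boot:)

$ grep -E 'CONFIG_SLAB=|CONFIG_SLUB=' /boot/config-"$(uname -r)"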

Tags: kernel-4.8
Revision history for this message
Colin Ian King (colin-king) wrote :

Oh, and for those interested, the many-worker-thread bug in SLAB appeared between 4.6 and 4.7. I'm not bisecting that one for now as I don't want to use SLAB :-)

tags: added: kernel-4.8
Changed in linux (Ubuntu):
importance: Undecided → High
Tim Gardner (timg-tpi)
Changed in linux (Ubuntu Yakkety):
assignee: nobody → Tim Gardner (timg-tpi)
status: New → In Progress
Tim Gardner (timg-tpi)
Changed in linux (Ubuntu Yakkety):
status: In Progress → Fix Committed
Revision history for this message
Doug Smythies (dsmythies) wrote :

This issue was introduced with the massive kernel configuration changes between mainline kernels 4.7-rc4 and 4.7-rc5. While I have been working on it for a couple of weeks, I was never able to isolate the exact kernel configuration change that causes it. I am compiling a kernel now (4.8-rc7) with this change reverted to test. The attachment has some tools I made to show the issue very simply.

If I look at linux/Documentation/workqueue.txt and do:

echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event

and:

cat /sys/kernel/debug/tracing/trace_pipe > out.txt

with the issue, I get somewhere between 10,000 and 20,000 occurrences of memcg_kmem_cache_create_func in the file (using my simple test method). Without the issue, I get 21, and an overall file size about 50 times smaller, for otherwise similar conditions; i.e. with the issue this stuff seems to go nuts.
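
(To put a number on that, I just count the event in the captured trace; a trivial sketch using the out.txt from above:)

$ grep -c memcg_kmem_cache_create_func out.txt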

Revision history for this message
Doug Smythies (dsmythies) wrote :

I confirm that reverting the SLAB / SLUB changes in the Ubuntu kernel configuration file for mainline kernel 4.8-rc7 to the state they were in for mainline kernel 4.7-rc4 fixes the issue (note: I disable debug info, because otherwise it takes over twice as long to compile the kernel, and the result is enormous):

$ scripts/diffconfig .config .config-4.8.0-040800rc7-generic
-HAVE_ALIGNED_STRUCT_PAGE y
-SLUB_CPU_PARTIAL y
-SLUB_DEBUG y
-SLUB_DEBUG_ON n
-SLUB_STATS n
 DEBUG_INFO n -> y
 SLAB n -> y
 SLAB_FREELIST_RANDOM n -> y
 SLUB y -> n
+DEBUG_INFO_DWARF4 n
+DEBUG_INFO_REDUCED n
+DEBUG_INFO_SPLIT n
+DEBUG_SLAB n
+GDB_SCRIPTS n
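
(For anyone who wants to flip the allocator back on their own build, the kernel's scripts/config helper can make the change before compiling; a sketch, run from the top of the source tree, letting olddefconfig resolve the dependent options:)

$ scripts/config --disable SLAB --enable SLUB
$ scripts/config --enable SLUB_CPU_PARTIAL --enable SLUB_DEBUG
$ make olddefconfig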

Revision history for this message
Colin Ian King (colin-king) wrote :

...plus SLUB improves boot speed; I'm seeing ~1 second shaved off an 8.5-second boot.
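
(If anyone wants to reproduce the comparison, systemd-analyze gives quick numbers; a sketch:)

$ systemd-analyze
$ systemd-analyze blame | head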

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 4.8.0-16.17

---------------
linux (4.8.0-16.17) yakkety; urgency=low

  [ Tim Gardner ]

  * Release Tracking Bug
    - LP: #1626768

  * Support ARM GIC ITS in ACPI mode (LP: #1626631)
    - [Config] CONFIG_ACPI_IORT=y
    - SAUCE: ACPI: I/O Remapping Table (IORT) initial support
    - SAUCE: ACPI: Add new IORT functions to support MSI domain handling
    - SAUCE: irqchip/gicv3-its: Cleanup for ITS domain initialization
    - SAUCE: irqchip/gicv3-its: Refactor ITS DT init code to prepare for ACPI
    - SAUCE: irqchip/gicv3-its: Probe ITS in the ACPI way
    - SAUCE: irqchip/gicv3-its: Factor out PCI-MSI part that might be reused for ACPI
    - SAUCE: irqchip/gicv3-its: Use MADT ITS subtable to do PCI/MSI domain initialization
    - SAUCE: PCI/MSI: Setup MSI domain on a per-device basis using IORT ACPI table

  * 4.8 dropped CONFIG_ATA=y (breaks systemd's TEST-08-ISSUE-2730 upstream test)
    (LP: #1626394)
    - [Config] CONFIG_ATA=y

  * Yakkety: Enable drivers with respect to Xenial (LP: #1626543)
    - [Config] CONFIG_VMD=m
    - [Config] CONFIG_MAC80211_RC_MINSTREL_VHT=y for all arches
    - [Config] CONFIG_OF=y for all arches
    - [Config] CONFIG_BLK_DEV_NVME_SCSI=y
    - [Config] Xenial device settings sync with amd64
    - [Config] Xenial device settings sync with i386
    - [Config] CONFIG_MTD_UBI_GLUEBI=m
    - [Config] Xenial device settings sync with armhf
    - [Config] Xenial device settings sync with arm64

  * yakkety 4.8, missing config CONFIG_USERFAULTFD=y (LP: #1626149)
    - [Config] CONFIG_USERFAULTFD=y

  * 4.8 regression: SLAB is being used instead of SLUB (LP: #1626564)
    - [Config] CONFIG_SLUB=y

  * image won't boot after upgrading to yakkety's 4.8 kernel because efi
    (LP: #1626158)
    - add nls_cp437 to the generic.inclusion-list

 -- Tim Gardner <email address hidden> Thu, 22 Sep 2016 06:51:45 -0600

Changed in linux (Ubuntu Yakkety):
status: Fix Committed → Fix Released
Revision history for this message
Doug Smythies (dsmythies) wrote :

From my experiments, it seems the issue is not 100% solved, but it is much, much less probable.
On my Yakkety test laptop, if I go back to kernel 4.4.0-9136-generic I cannot create the high number of kworker threads. If I boot with kernel 4.8.0-16-generic, I can create the high number of kworker threads; however, it is rather difficult.
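
(For reference, the kworker counts mentioned here come from simply counting the threads; a sketch:)

$ ps -eo comm | grep -c '^kworker'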

Revision history for this message
Doug Smythies (dsmythies) wrote :

Note: For kernel work, I only use and compile kernels from the main kernel.org git branch. I steal the Ubuntu kernel configuration.

The issue of a high number of kworker threads does not exist in kernel 4.6, but does in 4.7-rc1.
When using "SLUB" it is a little difficult to detect on my Yakkety test Laptop, however it is fairly easy to detect on my 16.04 server. When using "SLAB" it is trivial to detect on either computer.

A bisection is going to take:

"Bisecting: 5788 revisions left to test after this (roughly 13 steps)"

I can only get to that around Tuesday or Wednesday.
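
(For anyone following along, the bisection setup is roughly this; a sketch against the mainline tree, marking v4.6 good and v4.7-rc1 bad:)

$ git bisect start
$ git bisect bad v4.7-rc1
$ git bisect good v4.6

(then build and boot each commit git suggests, test it, and mark it with "git bisect good" or "git bisect bad" until it converges)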

Revision history for this message
Doug Smythies (dsmythies) wrote :

In case anyone is interested:
I got this far with the kernel bisection, but can only continue on Tuesday, maybe Monday night.

Revision history for this message
Martin Pitt (pitti) wrote :

> it seems the issue is not 100% solved, but is much much much less probable.

I confirm this in bug 1626436 -- boot time is a bit faster and load now "only" ~ 35 instead of ~ 250, but it's still a huge regression compared to 4.4.

Revision history for this message
Doug Smythies (dsmythies) wrote :

I was working on the assumption that the same commit was causing both the SLAB and the remaining SLUB issues; however, it turns out that assumption was incorrect.

The commit that causes the SLAB issue is:
801faf0db8947e01877920e848a4d338dd7a99e7
"mm/slab: lockless decision to grow cache"

The commit that causes the remaining SLUB issue is:
81ae6d03952c1bfb96e1a716809bd65e7cd14360
"mm/slub.c: replace kick_all_cpus_sync() with synchronize_sched() in kmem_cache_shrink()"

It turns out that they just so happen to be adjacent commits in the tree.
I will attempt to write e-mails for upstream, but it'll take me a while. (Anyone willing to review before I send them?)

Revision history for this message
Doug Smythies (dsmythies) wrote :

I filed two bug reports upstream:

https://bugzilla.kernel.org/show_bug.cgi?id=172991
https://bugzilla.kernel.org/show_bug.cgi?id=172981

I also made a kernel 4.8-rc8 with 81ae6d03 reverted (because it is so trivial to revert), and no matter how hard I beat on it I cannot get it to mess up.
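
(The revert itself is a one-liner against the mainline tree; a sketch, after which the kernel is rebuilt as usual:)

$ git revert 81ae6d03952c1bfb96e1a716809bd65e7cd14360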

Revision history for this message
Tim Gardner (timg-tpi) wrote :

Doug - you should start a new bug to address your issue as this one will auto-close as soon as the next kernel is uploaded.

Revision history for this message
Doug Smythies (dsmythies) wrote :

> Doug - you should start a new bug to address your issue as this
> one will auto-close as soon as the next kernel is uploaded.

I do not understand. This one is not solved, as there are two levels of contributing factors. This one should be set back to a status of "In Progress". Once both SLAB and SLUB are fixed, the decision to use SLAB or SLUB should be re-evaluated.

One of Colin's original questions: "why was it [CONFIG_SLAB] changed for Yakkety?" also still needs an answer.

Revision history for this message
Tim Gardner (timg-tpi) wrote :

OK, your point is valid. I was just looking at the bug title. As for why SLAB vs. SLUB happened, it was a regression introduced when I adopted more modular config settings from Debian.

Revision history for this message
Sarah Newman (srn-f) wrote :

Are you sure SLAB vs. SLUB fixed this?

I have images built from October 13 and today (October 22) with 4.8.0-22-generic and 4.8.0-26-generic respectively. On a 4.8.0-22-generic boot there are 37 kworker threads; on 4.8.0-26-generic there are 524 kworker threads. It could be that with enough reboots the older version would spawn as many threads, I'm not sure.

Revision history for this message
Sarah Newman (srn-f) wrote :

Also: the 524 threads were with Xen PVM and two VCPUs. With one VCPU the problem goes away. The run on 4.8.0-22-generic also had two VCPUs.

There is no problem with Xen HVM and 4.8.0-26-generic with either one or two VCPUs.

Revision history for this message
Doug Smythies (dsmythies) wrote :

> Are you sure SLAB vs. SLUB fixed this?

No, it does not fix the issue. However, the issue is more difficult to reproduce. It seems easy enough to reproduce on my test server, but seems not to happen on my test laptop.

There are 3 related upstream commits that fix the issue (at least in my testing) for both SLAB and SLUB, and I think (but am not sure) they have been flagged to eventually be backported to stable. I do not think any of the patches have made it into the current release candidate kernel (currently 4.9-rc2) yet.

References:
https://patchwork.kernel.org/patch/9359269/
https://patchwork.kernel.org/patch/9359271/
https://patchwork.kernel.org/patch/9363679/

Revision history for this message
Doug Smythies (dsmythies) wrote :

One of the 3 patches was included in the mainline kernel some time ago. It fixed SLAB, but not SLUB. The other two patches are now in the mainline kernel and will appear in kernel 4.10-rc1. I'll re-test then.

Revision history for this message
Colin Ian King (colin-king) wrote :

Hi Doug, I believe I've now addressed this bug, see: bug 1649905
