2020-04-15 20:41:52 |
Mauricio Faria de Oliveira |
bug |
|
|
added bug |
2020-04-15 20:45:09 |
Mauricio Faria de Oliveira |
attachment added |
|
aufs-do-not-call-i_readcount_inc.patch https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1873074/+attachment/5354859/+files/aufs-do-not-call-i_readcount_inc.patch |
|
2020-04-15 20:49:03 |
Mauricio Faria de Oliveira |
attachment added |
|
kmod-kprobe-fput.c https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1873074/+attachment/5354860/+files/kmod-kprobe-fput.c |
|
2020-04-15 20:50:34 |
Mauricio Faria de Oliveira |
attachment added |
|
aufs-intro-i_readcount_inc https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1873074/+attachment/5354878/+files/aufs-intro-i_readcount_inc |
|
2020-04-15 20:51:25 |
Mauricio Faria de Oliveira |
attachment added |
|
xfstests_aufs.patch https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1873074/+attachment/5354879/+files/xfstests_aufs.patch |
|
2020-04-16 14:33:39 |
Mauricio Faria de Oliveira |
tags |
|
sts |
|
2020-04-22 15:32:16 |
Mike Salvatore |
bug |
|
|
added subscriber Thadeu Lima de Souza Cascardo |
2020-05-27 16:24:01 |
Mauricio Faria de Oliveira |
bug |
|
|
added subscriber Jay Vosburgh |
2020-06-17 10:59:51 |
Mauricio Faria de Oliveira |
bug |
|
|
added subscriber J. R. Okajima |
2020-06-17 11:07:12 |
Mauricio Faria de Oliveira |
bug |
|
|
added subscriber Salvatore Bonaccorso |
2020-06-17 13:21:25 |
J. R. Okajima |
attachment added |
|
a.patch https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1873074/+attachment/5384729/+files/a.patch |
|
2020-06-17 13:48:24 |
Mauricio Faria de Oliveira |
bug |
|
|
added subscriber Ben Hutchings |
2020-06-24 03:00:23 |
Seth Arnold |
bug |
|
|
added subscriber Terry Rudd |
2020-06-24 03:01:31 |
Seth Arnold |
bug |
|
|
added subscriber Brad Figg |
2020-06-24 03:01:58 |
Seth Arnold |
bug |
|
|
added subscriber Andy Whitcroft |
2020-06-25 22:10:36 |
Seth Arnold |
cve linked |
|
2020-11935 |
|
2020-06-29 12:02:39 |
Mauricio Faria de Oliveira |
linux (Ubuntu): status |
New |
In Progress |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
nominated for series |
|
Ubuntu Groovy |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
bug task added |
|
linux (Ubuntu Groovy) |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
nominated for series |
|
Ubuntu Bionic |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
bug task added |
|
linux (Ubuntu Bionic) |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
nominated for series |
|
Ubuntu Eoan |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
bug task added |
|
linux (Ubuntu Eoan) |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
nominated for series |
|
Ubuntu Focal |
|
2020-06-29 12:02:58 |
Mauricio Faria de Oliveira |
bug task added |
|
linux (Ubuntu Focal) |
|
2020-06-29 12:03:07 |
Mauricio Faria de Oliveira |
linux (Ubuntu Bionic): status |
New |
In Progress |
|
2020-06-29 12:03:10 |
Mauricio Faria de Oliveira |
linux (Ubuntu Bionic): importance |
Undecided |
Medium |
|
2020-06-29 12:03:13 |
Mauricio Faria de Oliveira |
linux (Ubuntu Bionic): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-06-29 12:03:18 |
Mauricio Faria de Oliveira |
linux (Ubuntu Eoan): status |
New |
Won't Fix |
|
2020-06-29 12:03:22 |
Mauricio Faria de Oliveira |
linux (Ubuntu Eoan): importance |
Undecided |
Medium |
|
2020-06-29 12:03:24 |
Mauricio Faria de Oliveira |
linux (Ubuntu Eoan): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-06-29 12:03:29 |
Mauricio Faria de Oliveira |
linux (Ubuntu Focal): status |
New |
In Progress |
|
2020-06-29 12:03:32 |
Mauricio Faria de Oliveira |
linux (Ubuntu Focal): importance |
Undecided |
Medium |
|
2020-06-29 12:03:35 |
Mauricio Faria de Oliveira |
linux (Ubuntu Focal): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-06-29 12:04:05 |
Mauricio Faria de Oliveira |
linux (Ubuntu Groovy): status |
In Progress |
Won't Fix |
|
2020-06-29 12:04:09 |
Mauricio Faria de Oliveira |
linux (Ubuntu Groovy): importance |
Undecided |
Medium |
|
2020-06-29 12:04:12 |
Mauricio Faria de Oliveira |
linux (Ubuntu Groovy): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-06-29 12:04:27 |
Mauricio Faria de Oliveira |
linux (Ubuntu): importance |
Undecided |
Medium |
|
2020-06-29 12:04:30 |
Mauricio Faria de Oliveira |
linux (Ubuntu): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-06-29 13:12:10 |
Mauricio Faria de Oliveira |
description |
Problem Report:
--------------
A user reported that several nodes in their Kubernetes clusters
hit a kernel panic at about the same time, and periodically
(usually at ~35 days of uptime, and in the same order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes: in __fput(), via iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They provided a crashdump (privately available) that is used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches this public bug of another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and potentially
also via remote-access operations (e.g., a web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not monotonically increase, but return to 0.)
[Regression Potential]
* This changes the core path by which aufs opens files, so there
is a risk of regression; however, the fix changes aufs to match
how other filesystems work, so this is generally OK to do.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount).
* The aufs maintainer has access to an internal test suite
used to validate aufs changes; it was used to identify the
first regression (in the branch RW/RO mode change) and then
to validate/publish the patches upstream, so this should be good now.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Not required on Eoan (EOL date before SRU cycle release date)
* Required on Xenial, Bionic, and Focal.
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes clusters
hit a kernel panic at about the same time, and periodically
(usually at ~35 days of uptime, and in the same order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes: in __fput(), via iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They provided a crashdump (privately available) that is used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches this public bug of another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-06-29 13:16:04 |
Mauricio Faria de Oliveira |
description |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and potentially
also via remote-access operations (e.g., a web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not monotonically increase, but return to 0.)
[Regression Potential]
* This changes the core path by which aufs opens files, so there
is a risk of regression; however, the fix changes aufs to match
how other filesystems work, so this is generally OK to do.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount).
* The aufs maintainer has access to an internal test suite
used to validate aufs changes; it was used to identify the
first regression (in the branch RW/RO mode change) and then
to validate/publish the patches upstream, so this should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patched to use aufs instead of overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches)
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Not required on Eoan (EOL date before SRU cycle release date)
* Required on Xenial, Bionic, and Focal.
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes clusters
hit a kernel panic at about the same time, and periodically
(usually at ~35 days of uptime, and in the same order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes: in __fput(), via iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They provided a crashdump (privately available) that is used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches this public bug of another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-06-29 13:16:55 |
Mauricio Faria de Oliveira |
description |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and potentially
also via remote-access operations (e.g., a web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not monotonically increase, but return to 0.)
[Regression Potential]
* This changes the core path by which aufs opens files, so there
is a risk of regression; however, the fix changes aufs to match
how other filesystems work, so this is generally OK to do.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount).
* The aufs maintainer has access to an internal test suite
used to validate aufs changes; it was used to identify the
first regression (in the branch RW/RO mode change) and then
to validate/publish the patches upstream, so this should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patched to use aufs instead of overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions were observed in the stress-ng/xfstests logs or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Not required on Eoan (EOL date before SRU cycle release date)
* Required on Xenial, Bionic, and Focal.
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes clusters
hit a kernel panic at about the same time, and periodically
(usually at ~35 days of uptime, and in the same order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes: in __fput(), via iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They provided a crashdump (privately available) that is used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches this public bug of another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-06-29 13:41:35 |
Mauricio Faria de Oliveira |
linux (Ubuntu Eoan): status |
Won't Fix |
In Progress |
|
2020-06-29 13:43:23 |
Mauricio Faria de Oliveira |
description |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users; and also
remote access operations (e.g., web server) potentially.
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode in
aufs (UINT_MAX times, to overflow a signed int back to 0.)
* Alternatively, monitor the underlying filesystems's file
inode.i_readcount over several open/close system calls.
(should not monotonically increase; rather, return to 0.)
[Regression Potential]
* This changes the core path that aufs opens files, so there
is a risk of regression; however, the fix changes aufs for
how other filesystems work, so this generally is OK to do.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Not required on Eoan (EOL date before SRU cycle release date)
* Required on Bionic and Focal and Xenial.
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and
potentially also via remote access operations (e.g., a
web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not increase monotonically, but return to 0.)
[Regression Potential]
* This changes the core path through which aufs opens files,
so there is a risk of regression; however, the fix makes
aufs behave as other filesystems do, so it is generally safe.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Required on LTS releases: Bionic, Focal, and Xenial.
* Required on other releases: Disco and Eoan (for custom kernels)
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-06-29 13:49:39 |
Mauricio Faria de Oliveira |
description |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and
potentially also via remote access operations (e.g., a
web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-standalone.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not increase monotonically, but return to 0.)
[Regression Potential]
* This changes the core path through which aufs opens files,
so there is a risk of regression; however, the fix makes
aufs behave as other filesystems do, so it is generally safe.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Required on LTS releases: Bionic, Focal, and Xenial.
* Required on other releases: Disco and Eoan (for custom kernels)
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and
potentially also via remote access operations (e.g., a
web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-linux.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not increase monotonically, but return to 0.)
[Regression Potential]
* This changes the core path through which aufs opens files,
so there is a risk of regression; however, the fix makes
aufs behave as other filesystems do, so it is generally safe.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Required on LTS releases: Bionic, Focal, and Xenial.
* Required on other releases: Disco and Eoan (for custom kernels)
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-06-29 14:46:13 |
Mauricio Faria de Oliveira |
description |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and
potentially also via remote access operations (e.g., a
web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs5-linux.git:
- 72a59459eba5 aufs: do not call i_readcount_inc()
- d8e4eb086cb4 aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not increase monotonically, but return to 0.)
[Regression Potential]
* This changes the core path through which aufs opens files,
so there is a risk of regression; however, the fix makes
aufs behave as other filesystems do, so it is generally safe.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Required on LTS releases: Bionic, Focal, and Xenial.
* Required on other releases: Disco and Eoan (for custom kernels)
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
[Impact]
* Systems with aufs mounts are vulnerable to a kernel BUG(),
which can turn into a panic/crash if panic_on_oops is set.
* It is exploitable by unprivileged local users, and
potentially also via remote access operations (e.g., a
web server).
* This issue has also manifested in Kubernetes deployments
with a kernel panic in iptables-save or iptables-restore
after a few weeks of uptime, without user interaction.
* Usually all Kubernetes worker nodes hit the issue around
the same time.
[Fix]
* The issue is fixed with 2 patches in aufs4-linux.git:
- 515a586eeef3 aufs: do not call i_readcount_inc()
- f10aea57d39d aufs: bugfix, IMA i_readcount
* The first addresses the issue, and the second addresses a
regression in the aufs feature to change RW branches to RO.
* The kernel v5.3 aufs patches had an equivalent fix to the
second patch, which is present in the Focal aufs patchset
(and on ubuntu-unstable/master & /master-5.8 on 20200629)
- 1d26f910c53f aufs: for v5.3-rc1, maintain i_readcount
(in aufs5-linux.git)
[Test Case]
* Repeatedly open/close the same file in read-only mode on
aufs (UINT_MAX times, to overflow the signed int back to 0).
* Alternatively, monitor the underlying filesystem's file
inode.i_readcount over several open/close system calls.
(It should not increase monotonically, but return to 0.)
[Regression Potential]
* This changes the core path through which aufs opens files,
so there is a risk of regression; however, the fix makes
aufs behave as other filesystems do, so it is generally safe.
In any case, most regressions would manifest in open() or
close() (where the VFS handles/checks inode.i_readcount.)
* The aufs maintainer has access to an internal test-suite
used to validate aufs changes, used to identify the first
regression (in the branch RW/RO mode change), and then to
validate/publish the patches upstream; should be good now.
* This has also been tested with 'stress-ng --class filesystem'
and with 'xfstests -overlay' (patch to use aufs vs overlayfs)
on Xenial/Bionic/Focal (-proposed vs. -proposed + patches).
No regressions observed in stress-ng/xfstests log or dmesg.
[Other Info]
* Applied on Unstable (branches master and master-5.8)
* Not required on Groovy (still 5.4; should sync from Unstable)
* Required on LTS releases: Bionic, Focal, and Xenial.
* Required on other releases: Disco and Eoan (for custom kernels)
[Original Bug Description]
Problem Report:
--------------
A user reported that several nodes in their Kubernetes
clusters hit a kernel panic at about the same time, and
periodically (usually at 35 days of uptime, and in the same
order the nodes booted).
The kernel panic message/stack trace is consistent across
nodes, in __fput() by iptables-save/restore from kube-proxy.
Example:
"""
[3016161.866702] kernel BUG at .../include/linux/fs.h:2583!
[3016161.866704] invalid opcode: 0000 [#1] SMP
...
[3016161.866780] CPU: 40 PID: 33068 Comm: iptables-restor Tainted: P OE 4.4.0-133-generic #159-Ubuntu
...
[3016161.866786] RIP: 0010:[...] [...] __fput+0x223/0x230
...
[3016161.866818] Call Trace:
[3016161.866823] [...] ____fput+0xe/0x10
[3016161.866827] [...] task_work_run+0x86/0xb0
[3016161.866831] [...] exit_to_usermode_loop+0xc2/0xd0
[3016161.866833] [...] syscall_return_slowpath+0x4e/0x60
[3016161.866839] [...] int_ret_from_sys_call+0x25/0x9f
"""
(uptime: 3016161 seconds / (24*60*60) = 34.90 days)
They have provided a crashdump (privately available) used
for analysis later in this bug report.
Note: the root cause turns out to be independent of K8s,
as explained in the Root Cause section.
Related Report:
--------------
This behavior matches a public bug report from another user:
https://github.com/kubernetes/kubernetes/issues/70229
"""
I have several machines happen kernel panic,and these
machine have same dump trace like below:
KERNEL: /usr/lib/debug/boot/vmlinux-4.4.0-104-generic
...
PANIC: "kernel BUG at .../include/linux/fs.h:2582!"
...
COMMAND: "iptables-restor"
...
crash> bt
...
[exception RIP: __fput+541]
...
#8 [ffff880199f33e60] __fput at ffffffff812125ac
#9 [ffff880199f33ea8] ____fput at ffffffff812126ee
#10 [ffff880199f33eb8] task_work_run at ffffffff8109f101
#11 [ffff880199f33ef8] exit_to_usermode_loop at ffffffff81003242
#12 [ffff880199f33f30] syscall_return_slowpath at ffffffff81003c6e
#13 [ffff880199f33f50] int_ret_from_sys_call at ffffffff818449d0
...
The above showed command "iptables-restor" cause the kernel
panic and its pid is 16884,its parent process is kube-proxy.
Sometimes the process of kernel panic is "iptables-save" and
the dump trace are same.
The kernel panic always happens every 26 days(machine uptime)
"""
<< Adding further sections as comments to keep page short. >> |
|
2020-07-09 07:01:06 |
Alex Murray |
information type |
Private Security |
Public Security |
|
2020-07-09 08:26:11 |
Ubuntu Foundations Team Bug Bot |
tags |
sts |
patch sts |
|
2020-07-22 13:49:22 |
Mauricio Faria de Oliveira |
linux (Ubuntu Bionic): status |
In Progress |
Fix Released |
|
2020-07-22 13:49:27 |
Mauricio Faria de Oliveira |
linux (Ubuntu Focal): status |
In Progress |
Fix Released |
|
2020-07-22 13:49:43 |
Mauricio Faria de Oliveira |
nominated for series |
|
Ubuntu Xenial |
|
2020-07-22 13:49:43 |
Mauricio Faria de Oliveira |
bug task added |
|
linux (Ubuntu Xenial) |
|
2020-07-22 13:49:49 |
Mauricio Faria de Oliveira |
linux (Ubuntu Xenial): status |
New |
Fix Released |
|
2020-07-22 13:49:52 |
Mauricio Faria de Oliveira |
linux (Ubuntu Xenial): importance |
Undecided |
Medium |
|
2020-07-22 13:49:55 |
Mauricio Faria de Oliveira |
linux (Ubuntu Xenial): assignee |
|
Mauricio Faria de Oliveira (mfo) |
|
2020-07-22 13:50:02 |
Mauricio Faria de Oliveira |
linux (Ubuntu Eoan): status |
In Progress |
Fix Committed |
|
2020-08-18 16:59:40 |
Brian Murray |
linux (Ubuntu Eoan): status |
Fix Committed |
Won't Fix |
|
2022-09-14 13:44:12 |
Mauricio Faria de Oliveira |
linux (Ubuntu): status |
In Progress |
Fix Released |
|