Probable regression with EXT3 file systems and CVE-2018-1093 patches
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
linux (Ubuntu) | Fix Released | Critical | Unassigned |
Trusty | Fix Released | Critical | Unassigned |
Bug Description
== SRU Justification ==
Mainline commit 7dac4a1726a9 introduced a regression in v4.17-rc1, which
made its way into Trusty via upstream stable updates. This regression
is resolved by mainline commit 22be37acce25. That commit has been cc'd
to upstream stable, but has not made its way into Trusty yet.
== Fix ==
22be37acce25 ("ext4: fix bitmap position validation")
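In rough terms, the regression and its fix can be modeled as below. This is a hedged, simplified Python paraphrase of the kernel's bitmap-position check (the real code lives in fs/ext4/balloc.c and uses EXT4_B2C()/EXT4_CLUSTERS_PER_GROUP()); the names and constants here are illustrative, not the kernel's.

```python
# Simplified model of the ext4 bitmap position validation
# (illustrative paraphrase of commits 7dac4a1726a9 and 22be37acce25;
# not the actual kernel source).

BLOCK_SIZE = 4096          # bytes per block, matching the tune2fs output below
BLOCKS_PER_GROUP = 32768   # blocks per group, matching the tune2fs output below

def bitmap_position_valid(bitmap_blk, group_first_block, fixed=True):
    """Return True if a bitmap block lands at a valid offset in its group."""
    offset = bitmap_blk - group_first_block
    if offset < 0:
        return False
    # Buggy bound (7dac4a1726a9): the offset was compared against the block
    # size, rejecting bitmaps legitimately placed deeper into the group.
    # Fixed bound (22be37acce25): compare against the group size instead.
    bound = BLOCKS_PER_GROUP if fixed else BLOCK_SIZE
    return offset < bound

# A bitmap placed 8192 blocks into its group (a legal layout, e.g. when
# mke2fs spreads metadata to match a RAID stride) is rejected by the buggy
# check but accepted by the fixed one.
print(bitmap_position_valid(8192, 0, fixed=False))  # False: false positive
print(bitmap_position_valid(8192, 0, fixed=True))   # True
```

The false positive is what drives the file system read-only: the validation failure is treated as on-disk corruption, so ext4 records an error and remounts read-only (or continues, per the errors behavior), even though the layout is fine.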
== Regression Potential ==
Low. This commit has been cc'd to upstream stable, so it has had
additional upstream review.
== Test Case ==
A test kernel was built with this patch and tested by the original bug
reporter, who confirms that the test kernel resolves the bug.
A customer reported that all of their ext3 systems, and none of their ext4 systems, had their file systems forced into read-only mode, I believe after rebooting from 3.13.0-156.206 into 3.13.0-157.207. Here is the output of tune2fs -l for one of the affected file systems:
tune2fs 1.42.12 (29-Aug-2014)
Last mounted on: /
Filesystem UUID: 748f503a-
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_
Default mount options: (none)
Filesystem state: clean with errors
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1966080
Block count: 7863296
Reserved block count: 393164
Free blocks: 4568472
Free inodes: 1440187
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1022
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stride: 128
RAID stripe width: 512
Filesystem created: Thu Feb 25 21:54:24 2016
Last mount time: Fri Aug 24 07:40:51 2018
Last write time: Fri Aug 24 07:40:51 2018
Mount count: 1
Maximum mount count: 25
Last checked: Fri Aug 24 07:38:54 2018
Check interval: 15552000 (6 months)
Next check after: Wed Feb 20 07:38:54 2019
Lifetime writes: 7381 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: d6564a54-
Journal backup: inode blocks
FS Error count: 210
First error time: Fri Aug 24 07:40:51 2018
First error function: ext4_validate_
First error line #: 376
First error inode #: 0
First error block #: 0
Last error time: Sun Aug 26 19:35:16 2018
Last error function: ext4_remount
Last error line #: 4833
Last error inode #: 0
Last error block #: 0
CVE References
Changed in linux (Ubuntu):
assignee: nobody → Joseph Salisbury (jsalisbury)
Changed in linux (Ubuntu Trusty):
assignee: nobody → Joseph Salisbury (jsalisbury)
Changed in linux (Ubuntu):
status: Triaged → In Progress
Changed in linux (Ubuntu Trusty):
status: Triaged → In Progress
Changed in linux (Ubuntu):
importance: High → Critical
Changed in linux (Ubuntu Trusty):
importance: High → Critical
Changed in linux (Ubuntu Trusty):
status: In Progress → Fix Committed
Changed in linux (Ubuntu):
status: In Progress → Fix Released
tags: added: cscc
The unusual thing I noticed here is the RAID geometry:
RAID stride: 128
RAID stripe width: 512
I'm going to do some testing around this to see if it's related. It could be that this geometry is the trigger, rather than anything specific to ext3 versus ext4.
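If the stride theory holds, the reported geometry makes the numbers line up. As a rough sketch, assuming (hypothetically) that mke2fs staggers each group's block bitmap by roughly (group number × stride) blocks within the group, the stagger exceeds the buggy bound (the 4096-byte block size) very early in the disk:

```python
# Rough arithmetic check of the stride theory, using values from the
# tune2fs output above. The staggering formula is an assumption for
# illustration, not mke2fs's actual placement algorithm.
BLOCK_SIZE = 4096
BLOCKS_PER_GROUP = 32768
RAID_STRIDE = 128
BUGGY_BOUND = BLOCK_SIZE  # the bound the regression compared against

# First group whose staggered bitmap offset would trip the buggy check:
first_bad_group = next(g for g in range(256)
                       if (g * RAID_STRIDE) % BLOCKS_PER_GROUP >= BUGGY_BOUND)
print(first_bad_group)  # 32 -> offset 4096 blocks, already rejected
```

Under this assumption only a handful of low-numbered groups would validate, which would explain errors appearing immediately at mount time, consistent with the first error timestamp matching the last mount time in the output above.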