Regarding #2 in comment #8 - I found that we can more or less do this with a few simple modifications to SQUASHFS_DECOMP_MULTI. The config options are upper bounds on the number of decompressors and data cache blocks. I tested this with the mounted-fs-memory-checker for comparison, limiting squashfs to 1 data cache block and 4 decompressors per super block (and with CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=1). Here's what I got for the "heavy" filesystems on a 2-core VM:
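For reference, the test configuration looks roughly like this. CONFIG_SQUASHFS_DECOMP_MULTI and CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE are existing upstream options; the names of the two new upper-bound options are illustrative only, not final:

```
# Existing upstream options:
CONFIG_SQUASHFS_DECOMP_MULTI=y
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=1

# New upper bounds added by the modifications (names illustrative):
# at most 4 parallel decompressors per super block
CONFIG_SQUASHFS_MAX_DECOMPRESSORS=4
# at most 1 block in the data cache per super block
CONFIG_SQUASHFS_DATA_CACHE_SIZE=1
```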
size-0m.squashfs.xz.heavy
# num-mounted extra-memory delta
0: 39.45MB
1: 39.85MB (delta: 0.40MB)
2: 41.91MB (delta: 2.06MB)
3: 43.99MB (delta: 2.07MB)
4: 46.06MB (delta: 2.08MB)
size-1m.squashfs.xz.heavy
# num-mounted extra-memory delta
0: 39.45MB
1: 39.85MB (delta: 0.40MB)
2: 41.91MB (delta: 2.06MB)
3: 43.97MB (delta: 2.06MB)
4: 46.04MB (delta: 2.06MB)
I expect this is identical to what we'd get with the kernel from comment #7, and is probably the minimum we can expect (2 * fs_block_size).
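The arithmetic behind that estimate can be sketched as follows. With one data cache block and one fragment cache block, the steady-state overhead per mount should be two filesystem blocks; the 1 MiB block size below is an assumption chosen to match the ~2.06 MB per-mount deltas above:

```python
def min_extra_memory(fs_block_size):
    """Expected minimum per-mount cache memory for squashfs limited to
    one data-cache block plus one fragment-cache block."""
    return 2 * fs_block_size

MiB = 1024 * 1024

# Assuming the heavy test filesystems use a 1 MiB block size:
print(min_extra_memory(1 * MiB) / MiB)  # 2.0 MiB, in line with the
                                        # ~2.06 MB deltas measured above
```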
I want to do some performance comparisons between these kernels and 4.4.0-47.68, and to get some idea of how often squashfs has to fall back to using the data cache rather than decompressing directly into the page cache.
My most recent build (with one block in the data cache, one block in the fragment cache, and a maximum of 4 parallel decompressors) can be found at
http://people.canonical.com/~sforshee/lp1636847/linux-4.4.0-47.68+lp1636847v201611101005/