I confirmed that I can create a squashfs larger than 2 GB (I used 2.1 GB), then append a file to it (a 50 MB file), and the resulting squashfs unpacks just fine.
```
root@xenial-32:~# dd if=/dev/urandom of=big bs=10M count=210
210+0 records in
210+0 records out
2202009600 bytes (2.2 GB, 2.1 GiB) copied, 147.973 s, 14.9 MB/s
root@xenial-32:~# mksquashfs big big.squashfs
Parallel mksquashfs: Using 8 processors
Creating 4.0 filesystem on big.squashfs, block size 131072.
[==========================================================================================================================================/] 16800/16800 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2150400.52 Kbytes (2100.00 Mbytes)
100.00% of uncompressed filesystem size (2150465.88 Kbytes)
Inode table size 353 bytes (0.34 Kbytes)
0.52% of uncompressed inode table size (67282 bytes)
Directory table size 21 bytes (0.02 Kbytes)
84.00% of uncompressed directory table size (25 bytes)
Number of duplicate files found 0
Number of inodes 2
Number of files 1
Number of fragments 1
Number of symbolic links 0
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 1
Number of ids (unique uids + gids) 1
Number of uids 1
root (0)
Number of gids 1
root (0)
root@xenial-32:~# dd if=/dev/urandom of=small bs=10M count=5
5+0 records in
5+0 records out
52428800 bytes (52 MB, 50 MiB) copied, 3.5235 s, 14.9 MB/s
root@xenial-32:~# mksquashfs small big.squashfs
Found a valid exportable SQUASHFS superblock on big.squashfs.
Compression used gzip
Inodes are compressed
Data is compressed
Fragments are compressed
Xattrs are compressed
Fragments are present in the filesystem
Always-use-fragments option is not specified
Duplicates are removed
Xattrs are stored
Filesystem size 2150400.52 Kbytes (2100.00 Mbytes)
Block size 131072
Number of fragments 1
Number of inodes 2
Number of ids 1
Parallel mksquashfs: Using 8 processors
Scanning existing filesystem...
Read existing filesystem, 1 inodes scanned
Appending to existing 4.0 filesystem on big.squashfs, block size 131072
All -b, -noI, -noD, -noF, -noX, no-duplicates, no-fragments, -always-use-fragments,
-exportable and -comp options ignored
If appending is not wanted, please re-run with -noappend specified!
Recovery file "squashfs_recovery_big.squashfs_256" written
If Mksquashfs aborts abnormally (i.e. power failure), run
mksquashfs dummy big.squashfs -recover squashfs_recovery_big.squashfs_256
to restore filesystem
[===========================================================================================================================================-] 400/400 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2201600.58 Kbytes (2150.00 Mbytes)
100.00% of uncompressed filesystem size (2201667.49 Kbytes)
Inode table size 388 bytes (0.38 Kbytes)
0.56% of uncompressed inode table size (68898 bytes)
Directory table size 40 bytes (0.04 Kbytes)
80.00% of uncompressed directory table size (50 bytes)
Number of duplicate files found 0
Number of inodes 3
Number of files 2
Number of fragments 1
Number of symbolic links 0
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 1
Number of ids (unique uids + gids) 1
Number of uids 1
root (0)
Number of gids 1
root (0)
root@xenial-32:~# ls
big big.squashfs small
root@xenial-32:~# mkdir a
root@xenial-32:~# cd a
root@xenial-32:~/a# unsquashfs ../big.squashfs
Parallel unsquashfs: Using 8 processors
2 inodes (17200 blocks) to write
[===========================================================================================================================================|] 17200/17200 100%
created 2 files
created 1 directories
created 0 symlinks
created 0 devices
created 0 fifos
```
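For reference, the transcript above can be scripted end-to-end. This is a minimal sketch, not the exact commands from the session: sizes are reduced so it finishes quickly (the original test used `bs=10M count=210`, i.e. 2.1 GB — raise `count` to exercise the >2GB case), `-no-progress` is added to keep the output quiet, and the script skips cleanly when squashfs-tools is not installed.

```shell
#!/bin/sh
# Reproduce: build a squashfs from one file, append a second file to the
# same image, then unpack it and check both files came back.
set -u

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT
cd "$workdir" || exit 1

STATUS=skip
if command -v mksquashfs >/dev/null 2>&1 && command -v unsquashfs >/dev/null 2>&1; then
    # Reduced sizes; the original report used bs=10M count=210 (2.1 GB)
    # and bs=10M count=5 (50 MB).
    dd if=/dev/urandom of=big bs=1M count=20 2>/dev/null
    dd if=/dev/urandom of=small bs=1M count=5 2>/dev/null

    mksquashfs big big.squashfs -no-progress   && \
    mksquashfs small big.squashfs -no-progress && \
    mkdir a && cd a                            && \
    unsquashfs -no-progress ../big.squashfs    && \
    [ -f squashfs-root/big ] && [ -f squashfs-root/small ] && STATUS=ok
fi
echo "$STATUS"
```

Note that the second `mksquashfs` invocation appends by default when the destination already contains a valid superblock (as the transcript shows); pass `-noappend` to overwrite instead.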