qemu may freeze during drive-mirroring on fragmented FS
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
QEMU | Expired | Undecided | Unassigned |
Bug Description
We see odd behavior in operation where QEMU freezes for long
stretches of seconds. We started a thread about this issue here:
https:/
It happens at least during an OpenStack Nova snapshot (QEMU blockdev-mirror)
and live block migration (which includes a network copy of the disk).
After further troubleshooting, it seems related to FS fragmentation on the host.
Reproducible at least on:
Ubuntu 18.04.3/
Ubuntu 16.04.6/
# Let's create a dedicated file system on an SSD/NVMe (a 60GB disk in my case):
$sudo mkfs.ext4 /dev/sda3
$sudo mount /dev/sda3 /mnt
$df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 59G 53M 56G 1% /mnt
# Create a fragmented disk file on it using 2MB chunks (takes about 30 min):
$sudo python3 create_
Filling up FS by Creating chunks files in: /mnt/chunks
We are probably full as expected!!: [Errno 28] No space left on device
Creating fragged disk file: /mnt/disk
$ls -lhs
59G -rw-r--r-- 1 root root 59G Jan 15 14:08 /mnt/disk
$ sudo e4defrag -c /mnt/disk
Total/best extents 41971/30
Average size per extent 1466 KB
Fragmentation score 2
[0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
This file (/mnt/disk) does not need defragmentation.
Done.
# The tool above says the file is not fragmented enough to be worth defragmenting.
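As a quick sanity check (a back-of-the-envelope calculation, not part of the report), the average extent size reported by e4defrag follows from the file size divided by the extent count; the small gap comes from the file being slightly smaller than a full 59 GiB:

```python
# Cross-check the e4defrag numbers: 41971 extents over a ~59 GiB file.
file_size_kb = 59 * 1024 * 1024      # 59 GiB expressed in KiB
extents = 41971                      # "Total/best extents 41971/30"
avg_extent_kb = file_size_kb / extents
print(round(avg_extent_kb))          # ~1474 KiB, vs the reported 1466 KB
```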
# Inject an image onto the fragmented disk file:
sudo chown ubuntu /mnt/disk
wget https:/
qemu-img convert -O raw bionic-
dd conv=notrunc iflag=fullblock if=bionic-
virt-customize -a /mnt/disk --root-password password:xxxx
# Log on and generate some console activity, e.g.: ping -i 0.3 127.0.0.1
$qemu-system-x86_64 -m 2G -enable-kvm -nographic \
-chardev socket,
-mon chardev=
-drive file=/mnt/
-device virtio-
$sync
$echo 3 | sudo tee -a /proc/sys/
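The qemu-system command above is truncated in the report. One plausible full invocation, matching the `/tmp/qmp-monitor` socket used in the QMP step below (the `qmp` chardev id, `drive0` drive id, and virtio-blk device choice are assumptions, not taken from the report):

```shell
qemu-system-x86_64 -m 2G -enable-kvm -nographic \
    -chardev socket,id=qmp,path=/tmp/qmp-monitor,server,nowait \
    -mon chardev=qmp,mode=control \
    -drive file=/mnt/disk,format=raw,if=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0
```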
# Start drive-mirror via QMP, targeting another SSD/NVMe partition
nc -U /tmp/qmp-monitor
{"execute"
{"execute"
^^^ The QEMU console may start to freeze at this step.
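The two QMP commands above are truncated in the report. A minimal sketch of the exchange, assuming a capability negotiation followed by `drive-mirror` (a real QMP command); the device id `drive0` and the target path are assumptions:

```python
# Sketch of the QMP exchange behind the truncated commands above.
import json
import socket

def qmp_commands(device="drive0", target="/mnt2/mirror.raw"):
    """Return the two QMP commands, in order: capability
    negotiation, then the mirror job itself. Ids are assumed."""
    return [
        {"execute": "qmp_capabilities"},
        {"execute": "drive-mirror",
         "arguments": {"device": device,
                       "target": target,
                       "sync": "full",
                       "format": "raw"}},
    ]

def run_mirror(sock_path="/tmp/qmp-monitor"):
    """Send the commands over the QMP unix socket, one JSON object each."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.recv(4096)  # discard the QMP greeting banner
        for cmd in qmp_commands():
            s.sendall(json.dumps(cmd).encode() + b"\n")
            print(s.recv(4096).decode())
```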
NOTE:
- The smaller the chunk size and the bigger the disk, the worse it gets.
  In operation we also hit the issue on a 400GB disk with a 13MB average extent size.
- Reproducible also on XFS.
Expected behavior:
-------------------
QEMU should remain steady, at worst with decreased storage performance
or slower mirroring because of the fragmented FS.
Observed behavior:
-------------------
Mirroring performance is still quite good even on a fragmented FS,
but it breaks QEMU.
#######
# create_fragged_fs.py -- reconstructed from the truncated listing in the
# original report; the data source (/dev/urandom), the temp-file options,
# and the fsync in the second loop are assumptions, not recovered text.
# Usage: sudo python3 <script> <mount_dir> <chunk_size_mb>
import sys
import os
import tempfile
import glob
import errno

MNT_DIR = sys.argv[1]
CHUNK_SZ_MB = int(sys.argv[2])
CHUNKS_DIR = MNT_DIR + '/chunks'
DISK_FILE = MNT_DIR + '/disk'

if not os.path.exists(CHUNKS_DIR):
    os.makedirs(CHUNKS_DIR)

# One megabyte of data reused as the write unit.
with open("/dev/urandom", "rb") as f_rand:
    mb_data = f_rand.read(1024 * 1024)

print("Filling up FS by Creating chunks files in: ", CHUNKS_DIR)
try:
    while True:
        tp = tempfile.NamedTemporaryFile(dir=CHUNKS_DIR, delete=False)
        for x in range(CHUNK_SZ_MB):
            tp.write(mb_data)
        tp.close()
except Exception as ex:
    print("We are probably full as expected!!: ", ex)

chunks = glob.glob(CHUNKS_DIR + '/*')
print("Creating fragged disk file: ", DISK_FILE)
with open(DISK_FILE, "w+b") as f_disk:
    for chunk in chunks:
        # Free one chunk, then write into the hole it leaves behind,
        # so the disk file is allocated in scattered extents.
        os.remove(chunk)
        try:
            for x in range(CHUNK_SZ_MB):
                f_disk.write(mb_data)
            f_disk.flush()
            os.fsync(f_disk.fileno())
        except IOError as ex:
            if ex.errno != errno.ENOSPC:
                raise
#######
This seems to be a regression introduced in commit (v2.7.0):
commit 0965a41e998ab820b5d660c8abfc8c819c97bc1b
Author: Vladimir Sementsov-Ogievskiy <email address hidden>
Date: Thu Jul 14 20:19:01 2016 +0300
mirror: double performance of the bulk stage if the disc is full
Mirror can do up to 16 in-flight requests, but actually on full copy
(the whole source disk is non-zero) in-flight is always 1. This happens
as the request is not limited in size: the data occupies maximum available
capacity of s->buf.
The patch limits the size of the request to some artificial constant
(1 Mb here), which is not that big or small. This effectively enables
back parallelism in mirror code as it was designed.
The result is important: the time to migrate 10 Gb disk is reduced from
~350 sec to 170 sec.
If I revert this commit on master, on top of (fef80ea073c48) with minor conflicts, the issue disappears.
On a fragmented file (average extent size 2125 KB) of 58GB:
- upstream: 207s (285 MB/s), instance is broken during mirroring
- upstream with revert: 225s (262 MB/s), instance is fine
Not fully tested, but the patch probably improves performance in other cases
(as discussed in the commit message); maybe it just needs refinement to avoid the freeze?
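The mechanism described in the commit message can be sketched with a toy model (the 16 MiB buffer size here is illustrative, not taken from the report): mirror allows up to 16 in-flight requests, but without a per-request size cap a full copy issues a single request that consumes the whole shared buffer, so only one is ever in flight; capping requests at 1 MiB restores the parallelism.

```python
# Toy model of mirror's in-flight request count, per the commit message.
MiB = 1024 * 1024

def in_flight(buf_size, req_cap, max_requests=16):
    """How many copy requests mirror can keep in flight when the shared
    buffer is carved into requests of at most req_cap bytes each."""
    per_request = min(buf_size, req_cap)
    return min(max_requests, buf_size // per_request)

print(in_flight(16 * MiB, 16 * MiB))  # no cap: 1 request fills the buffer -> 1
print(in_flight(16 * MiB, 1 * MiB))   # 1 MiB cap: 16 parallel requests -> 16
```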