qemu is very slow when adding 16,384 virtio-scsi drives
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| QEMU | Expired | Undecided | Unassigned | |
Bug Description
qemu runs very slowly when adding many virtio-scsi drives. I have attached a small reproducer shell script which demonstrates this.
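The attached script is not reproduced in this report; a minimal sketch of how such a reproducer might look (the null-co:// backend, device IDs, and qemu invocation below are assumptions, not the attached script):

```shell
#!/bin/bash
# Sketch of a reproducer: generate -drive/-device pairs for many
# virtio-scsi disks, backed by null-co:// so no disk images are needed.
N=16384
args=()
for i in $(seq 1 "$N"); do
    args+=(-drive "if=none,id=drive$i,file=null-co://,format=raw")
    args+=(-device "scsi-hd,drive=drive$i")
done
echo "generated ${#args[@]} arguments for $N drives"
# To reproduce the slowdown, pass them to qemu, e.g.:
# qemu-system-x86_64 -nodefaults -display none \
#     -device virtio-scsi-pci "${args[@]}"
```

Each disk contributes one `-drive` (the backend, with `if=none`) and one `-device` (the virtio-scsi disk that consumes it), so the option count grows linearly with N while startup time does not.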
Using perf shows the following stack trace taking all the time:
72.42%  71.15%  qemu-system-x86  qemu-system-x86_64  [.] drive_get
21.70%  21.34%  qemu-system-x86  qemu-system-x86_64  [.] blk_legacy_dinfo
 3.65%   3.59%  qemu-system-x86  qemu-system-x86_64  [.] blk_next
The first place where it takes an insane amount of time is simply processing the -drive options. The stack trace I see is this:
(gdb) bt
#0  0x00005583b596719a in drive_get (type=type@entry=IF_NONE, bus=bus@entry=0, unit=unit@entry=2313) at blockdev.c:223
#1  0x00005583b59679bd in drive_new (all_opts=0x5583b890e080, block_default_type=<optimized out>) at blockdev.c:996
#2  0x00005583b5971641 in drive_init_func (opaque=<optimized out>, opts=<optimized out>, errp=<optimized out>) at vl.c:1154
#3  0x00005583b5c1149a in qemu_opts_foreach (list=<optimized out>, func=0x5583b5971630 <drive_init_func>, opaque=0x5583b9980030, errp=0x0) at util/qemu-option.c:1114
#4  0x00005583b5830d30 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4499
We're iterating over every -drive option. Because we're using if=none, unit starts at 0, and line 996 of blockdev.c loops calling drive_get() until it finds a free unit number, in order to assign the unit. So we have a loop over every -drive option, calling drive_new(), which loops over unit numbers, calling drive_get(), which loops over every existing drive. So it's roughly O(N*N*N).