Ubuntu 18.04 - VM disk I/O performance issue when using file system passthrough
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| The Ubuntu-power-systems project | Invalid | Medium | Canonical Server | |
| qemu (Ubuntu) | Invalid | Undecided | Ubuntu on IBM Power Systems Bug Triage | |
Bug Description
== Comment: #0 - I-HSIN CHUNG <email address hidden> - 2019-11-15 12:35:05 ==
---Problem Description---
Ubuntu 18.04 - VM disk I/O performance issue when using file system passthrough
Contact Information = <email address hidden>
---uname output---
Linux css-host-22 4.15.0-1039-ibm-gt #41-Ubuntu SMP Wed Oct 2 10:52:25 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux (host)
Linux ubuntu 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:08:54 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux (vm)
Machine Type = p9/ac922
---Debugger---
A debugger is not configured
---Steps to Reproduce---
1. Env: Ubuntu 18.04.3 LTS; Genesis kernel linux-ibm-gt 4.15.0-1039.41; qemu 1:2.11+
2. Execute run.sh to run the fio benchmark:
2.1) run.sh:
#!/bin/bash
# Sweep block size, read/write mix, and job count; the shared options live in j1.txt.
for bs in 4k 16m
do
    for rwmixread in 0 25 50 75 100
    do
        for numjobs in 1 4 16 64
        do
            echo ./fio j1.txt --bs=$bs --rwmixread=$rwmixread --numjobs=$numjobs
            ./fio j1.txt --bs=$bs --rwmixread=$rwmixread --numjobs=$numjobs
        done
    done
done
2.2) j1.txt:
[global]
direct=1
rw=randrw
refill_buffers
norandommap
randrepeat=0
ioengine=libaio
iodepth=64
runtime=60
; option name completed from the truncated report; allow_mounted_write is the only fio option with this prefix, value assumed
allow_mounted_write=1
[job2]
new_group
filename=/dev/vdb
filesize=1000g
cpus_allowed=0-63
numa_cpu_nodes=0
; option name completed from the truncated report; bind:0 assumed to match numa_cpu_nodes=0
numa_mem_policy=bind:0
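As a side note, a job file like this can be sanity-checked without starting any I/O, assuming a fio build recent enough to support parse-only mode:
# parse the job file and report option errors without running the workload
./fio --parse-only j1.txt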
3. Performance profile:
Device passthrough performance for the NVMe:
http://
File system passthrough performance:
http://
The desired performance when using file system passthrough should be similar to that of device passthrough.
Userspace tool common name: fio
The userspace tool has the following bit modes: should be 64 bit
Userspace rpm: ?
Userspace tool obtained from project website: na
*Additional Instructions for <email address hidden>:
-Post a private note with access information to the machine that the bug is occurring on.
-Attach ltrace and strace of userspace application.
| Field | Change |
|---|---|
| tags | added: architecture-ppc64le bugnameltc-182496 severity-medium targetmilestone-inin18045 |
| Changed in ubuntu: assignee | nobody → Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) |
| affects | ubuntu → qemu (Ubuntu) |
| no longer affects | qemu |
| Changed in ubuntu-power-systems: assignee | nobody → Canonical Server Team (canonical-server) |
| Changed in ubuntu-power-systems: importance | Undecided → Medium |
| Changed in ubuntu-power-systems: status | New → Incomplete |

Hi,
Let me share my expectations, which differ from yours:
You said "desired performance when using file system passthrough should be similar to the device passthrough".
IMHO that isn’t right - it might be "desired", but it is unrealistic to expect.
Usually you have a hierarchy:
1. device passthrough
2. using block devices
3. using images on Host Filesystem
4. using images on semi-remote cluster filesystems
(and a few special cases in between)
Going from 1 to 4, performance usually decreases while flexibility and manageability increase (a minimal libvirt sketch of the first three tiers follows below).
So, based on the initial report, I want to give a heads-up that this might eventually end up as "please adjust expectations".
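To make the tiers concrete, here is a minimal, hypothetical libvirt disk snippet for each of the first three; the PCI address, device paths and image path are invented for illustration, not taken from your setup:
<!-- 1. device passthrough: hand the whole NVMe PCI function to the guest via VFIO -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<!-- 2. block device: expose a host block device to the guest as a virtio disk -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
<!-- 3. image on the host filesystem: a file-backed virtio disk -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/test.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>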
---
That said, let’s focus on what your setup actually looks like and if there are obvious improvements or hidden bugs.
Unfortunately "file system passthrough" isn't a clearly defined thing.
Could you:
1) outline which disk storage you attached to the host
2) which filesystem is on that storage
3) how you are passing files and/or images to the guest
Please answer the questions above and attach libvirt's guest XML for both of your test cases; the commands sketched below may help collect that information.
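For reference, something along these lines run on the host should capture most of what is asked above (the domain name "guest1" is a placeholder for your actual guest):
# disks currently attached to the guest and their host-side sources
virsh domblklist guest1
# full guest definition, including the <disk>/<hostdev> elements
virsh dumpxml guest1
# host block devices and the filesystems on them
lsblk -f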
P.S. NVMe passthrough will soon become even faster on ppc64el due to the fix for bug LP: #1847948.