Cinder+LVM slower than expected

Bug #1544017 reported by Fabrizio Soppelsa
Affects             Status        Importance  Assigned to      Milestone
Fuel for OpenStack  In Progress   High        Michael Semenov
7.0.x               Won't Fix     Medium      MOS Cinder
8.0.x               Won't Fix     Medium      MOS Cinder
Mitaka              Fix Released  High        Michael Semenov

Bug Description

POC on MOS 7.0
Deployment: Default options, Neutron VLAN and Cinder over LVM
Measurements: base benchmarking with dd, time, and other system tools.

As for a diagnostic snapshot, we have one, but it does not reflect the latest redeployment with LVM. It is around 2.4 GB and available on request from Fabrizio if needed.

3x Supermicro controllers
- CPU: 1 CPU, 16 x 2.40 GHz (Intel(R) Xeon(R) CPU E5620 @ 2.40GHz)
- Memory: 1 x 4.0 MB, 12 x 8.0 GB, 96.0 GB total
- Disks: 3 drives, 60.0 TB total (model MR9266-8i)
- Interfaces: 1 x 1.0 Gbps, 11 x N/A
  - eth0: admin, public, mgmt
  - eth6: private
  - eth7: storage, speed N/A (probably 10G)

8x Supermicro compute+storage nodes
- Hardware and setup same as the controllers above

A simple dd write test on one of the physical servers yields about 1.6 GB/s.
The same test yields about 120 MB/s inside an instance running on the same server.

Raw results:

root@cinder-compute:~# dd if=/dev/zero of=/var/lib/nova/testfile bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 6.09593 s, 1.7 GB/s

or 13.6 Gbps.

[root@instance ~]# dd if=/dev/zero of=/d1/testfile bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 41.92 s, 250 MB/s

or 2 Gbps.

Host and instance, other test:

[root@p4-vmhost ~]# dd if=/dev/zero of=/home/testfile bs=1G count=10 oflag=direct
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 6.11967 s, 1.8 GB/s

[root@p4-vr1 ~]# dd if=/dev/zero of=/d1/testfile bs=1G count=10 oflag=direct
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 8.04445 s, 1.3 GB/s

[root@video6 ~]# dd if=/dev/zero of=/d1/testfile bs=1M count=1000000 oflag=direct
^C668768+0 records in
668768+0 records out
701254074368 bytes (701 GB) copied, 2919.03 s, 240 MB/s

On the storage server, running dstat:

root@node-8:~# dstat -all -N br-storage
----total-cpu-usage---- -dsk/total- net/br-stor ---paging-- ---system-- ---load-avg--- ---load-avg---
usr sys idl wai hiq siq| read writ| recv send| in out | int csw | 1m 5m 15m | 1m 5m 15m
1 1 98 0 0 0| 176k 7751k| 0 0 | 0 6B|7650 13k|0.92 1.06 1.17|0.92 1.06 1.17
2 5 92 0 0 1| 80k 874M| 208M 999k| 0 0 | 52k 30k|0.92 1.06 1.17|0.92 1.06 1.17
2 4 93 0 0 2| 32k 226M| 232M 1223k| 0 0 | 55k 30k|0.92 1.06 1.17|0.92 1.06 1.17
1 4 94 0 0 1| 16k 20k| 240M 1159k| 0 0 | 51k 28k|0.92 1.06 1.17|0.92 1.06 1.17
2 5 91 0 0 2| 0 16k| 230M 1173k| 0 0 | 54k 29k|0.92 1.06 1.17|0.92 1.06 1.17
1 4 94 0 0 1| 0 0 | 240M 1117k| 0 0 | 52k 29k|0.85 1.04 1.16|0.85 1.04 1.16
1 3 94 0 0 1| 240k 280k| 227M 837k| 0 0 | 45k 29k|0.85 1.04 1.16|0.85 1.04 1.16
2 6 89 0 0 3| 16k 1264M| 224M 867k| 0 0 | 51k 27k|0.85 1.04 1.16|0.85 1.04 1.16
2 3 94 0 0 1| 0 0 | 238M 993k| 0 0 | 46k 23k|0.85 1.04 1.16|0.85 1.04 1.16
1 4 94 0 0 1| 0 0 | 242M 993k| 0 0 | 48k 27k|0.85 1.04 1.16|0.85 1.04 1.16
2 3 94 0 0 1| 0 16k| 241M 1040k| 0 0 | 50k 30k|0.78 1.02 1.16|0.78 1.02 1.16
2 5 92 0 0 1| 40k 76k| 212M 812k| 0 0 | 44k 28k|0.78 1.02 1.16|0.78 1.02 1.16
2 6 91 0 0 2| 24k 1224M| 228M 929k| 0 0 | 52k 28k|0.78 1.02 1.16|0.78 1.02 1.16
1 3 95 0 0 1| 0 0 | 216M 934k| 0 0 | 47k 28k|0.78 1.02 1.16|0.78 1.02 1.16
2 3 93 0 0 1| 0 0 | 242M 1120k| 0 0 | 52k 28k|0.78 1.02 1.16|0.78 1.02 1.16
1 3 95 0 0 1| 0 0 | 237M 1093k| 0 0 | 50k 28k|0.72 1.00 1.15|0.72 1.00 1.15
2 2 94 0 0 1| 40k 80k| 229M 947k| 0 0 | 43k 23k|0.72 1.00 1.15|0.72 1.00 1.15
3 8 88 0 0 1| 24k 362M| 248M 972k| 0 0 | 49k 23k|0.72 1.00 1.15|0.72 1.00 1.15
5 10 84 0 0 2| 0 870M| 250M 1173k| 0 0 | 57k 29k|0.72 1.00 1.15|0.72 1.00 1.15
4 9 86 0 0 1| 0 60k| 241M 1055k| 0 0 | 48k 29k|0.72 1.00 1.15|0.72 1.00 1.15
4 4 90 0 0 1| 0 0 | 218M 974k| 0 0 | 46k 27k|0.74 1.00 1.15|0.74 1.00 1.15
4 5 89 1 0 2|6964k 72k| 243M 1020k| 0 0 | 54k 35k|0.74 1.00 1.15|0.74 1.00 1.15
3 4 92 0 0 2| 24k 196k| 246M 898k| 0 0 | 47k 28k|0.74 1.00 1.15|0.74 1.00 1.15
2 7 89 0 0 2| 0 1264M| 244M 900k| 0 0 | 52k 29k|0.74 1.00 1.15|0.74 1.00 1.15

Running iperf between the two servers shows almost full line rate:

[ 3] 8.0-10.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 10.0-12.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 12.0-14.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 14.0-16.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 16.0-18.0 sec 2.19 GBytes 9.39 Gbits/sec
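
The exact iperf invocation is not recorded above; a plain iperf (v2) run along these lines, with the peer's storage-network address left as a placeholder, produces this kind of 2-second interval output:

# iperf -s                                # on one server (listener)
# iperf -c <peer storage IP> -t 20 -i 2   # on the other server, reporting every 2 seconds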

Revision history for this message
Fabrizio Soppelsa (fsoppelsa) wrote :
summary: - Cinder+LVM is 4-5 times slower than expected
+ Cinder+LVM slower than expected
Revision history for this message
Fabrizio Soppelsa (fsoppelsa) wrote :
Ilya Kutukov (ikutukov)
Changed in fuel:
milestone: none → 7.0-updates
tags: added: area-mos cinder
Changed in fuel:
assignee: nobody → MOS Cinder (mos-cinder)
importance: Undecided → High
status: New → Confirmed
assignee: MOS Cinder (mos-cinder) → MOS Maintenance (mos-maintenance)
Dina Belova (dbelova)
no longer affects: fuel/future
Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

I've got similar results in my virtual lab.

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

From IRC, about LIO:
"[21:03:34] <eharney> e0ne: right, should be far less data being transferred between userspace and kernel space"

I've tested LVM-based volume performance with the LIO target on my virtualized all-in-one env. The results are good; I still need to verify it with MOS and test it on a hardware lab.
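
For context, switching the Cinder LVM backend from the tgt daemon to LIO is a one-option change in cinder.conf. A minimal sketch on the cinder-volume node, assuming the option lives in [DEFAULT] and that crudini and the LIO userspace tools are available (none of this is taken from the report itself):

# apt-get install -y targetcli python-rtslib                        # LIO userspace tools, if missing
# crudini --set /etc/cinder/cinder.conf DEFAULT iscsi_helper lioadm # default is tgtadm
# service cinder-volume restart

On deployments that define per-backend sections, iscsi_helper belongs in the backend section rather than [DEFAULT].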

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

tgtd consumes a lot of CPU. With LIO I got the same results for ephemeral and LVM-based storage inside the VM. With tgtd, volume performance was ~30-40% slower on my env:

ubuntu@test-ubuntu-vm:~$ dd if=/dev/zero of=/home/ubuntu/testfile bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 13.3723 s, 39.2 MB/s

ubuntu@test-ubuntu-vm:~$ sudo dd if=/dev/zero of=/mnt/testfile bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 13.3034 s, 39.4 MB/s

NOTE: ignore the absolute speed (39.4 MB/s); my lab is not very fast. What matters is the ratio between LVM+LIO and ephemeral storage, which should be similar on faster environments.
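
One way to see the tgtd overhead directly (a sketch, not the commands actually used here; assumes the sysstat package on the host) is to watch the target daemon while a dd like the one above runs inside the VM:

# pidstat -u 1 | grep tgtd    # per-second %CPU of the tgt daemon during the guest write

With LIO there is no equivalent userspace data path to watch: the target runs in the kernel (iscsi_target_mod), which matches the IRC note above about far less data crossing the userspace/kernel boundary.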

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

I've tried different methods of tuning LVM+tgtd (e.g. http://www.monperrus.net/martin/performance+of+read-write+throughput+with+iscsi). The results are a bit better, but not by much. The LIO target provides really good performance for LVM-based volumes.

I'm waiting for results from the hardware lab to provide performance test results.

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Here are the results for the tgtd [1] and LIO [2] targets. I propose switching the target to LIO to get better LVM performance.

[1] http://paste.openstack.org/show/493982/
[2] http://paste.openstack.org/show/493981/
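
After the switch, it is straightforward to confirm which target is actually exporting the volumes (a sketch; both tools must be present on the cinder-volume node):

# targetcli ls                                  # LIO: volumes appear under /backstores and /iscsi
# tgtadm --lld iscsi --mode target --op show    # tgtd: should list nothing once LIO is in use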

Dmitry Pyzhov (dpyzhov)
Changed in fuel:
milestone: 9.0 → 10.0
Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Michael, please confirm the performance improvements using the LIO target.

Revision history for this message
Michael Semenov (msemenov) wrote :
Ivan Kolodyazhny (e0ne)
Changed in fuel:
assignee: Ivan Kolodyazhny (e0ne) → Michael Semenov (msemenov)
Revision history for this message
Fabrizio Soppelsa (fsoppelsa) wrote :

Good. Is this a configuration we can make the default starting from 9.0 or 10.0?

Revision history for this message
Michael Semenov (msemenov) wrote :

Results for 9.0:
https://mirantis.jira.com/wiki/pages/viewpage.action?pageId=266471783

Testing is done for 9.0, so moving to Fix Released.

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

Won't Fix for updates as this is not a bug but a feature request.
