Cinder+LVM slower than expected
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | In Progress | High | Michael Semenov | |
| 7.0.x | Won't Fix | Medium | MOS Cinder | |
| 8.0.x | Won't Fix | Medium | MOS Cinder | |
| Mitaka | Fix Released | High | Michael Semenov | |
Bug Description
POC on MOS 7.0
Deployment: Default options, Neutron VLAN and Cinder over LVM
Measurements: basic benchmarking with dd, time, and other system tools.
As for a snapshot, we have one, but it does not reflect the latest redeployment with LVM. Its size is around 2.4 GB. Available upon request (to Fabrizio) if needed.
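The dd-based write test used throughout this report can be reproduced roughly as follows (a minimal sketch; the file path and size here are placeholders, and the actual runs below used a much larger count with `oflag=direct`):

```shell
# Write 64 MiB of zeroes in 1 MiB blocks and flush to disk at the end.
# The actual runs below used count=10000 and oflag=direct, which bypasses
# the page cache so the result reflects device throughput; some filesystems
# (e.g. tmpfs) reject O_DIRECT, hence the portable conv=fsync here.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fsync
```

The reported "X bytes copied, N s, Y B/s" line is what the raw results below quote.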
3x controllers Supermicro
- 1CPU 16 x 2.40 GHz Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
- Memory 1 x 4.0 MB, 12 x 8.0 GB, 96.0 GB total
- Disks 3 drives, 60.0 TB total: model MR9266-8i
- Interfaces 1 x 1.0 Gbps, 11 x N/A
- eth0: admin, public, mgmt
- eth6: private
- eth7: storage speed N/A (probably 10G)
8x compute+storage Supermicro
- Hardware and setup same as the controllers above
A simple dd write test on one of the physical servers yields 1.6 GB/s; the same test from an instance running on the same server yields only 120 MB/s.
Raw results:
root@cinder-
count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 6.09593 s, 1.7 GB/s
or 13.6 Gbps.
[root@instance ~]# dd if=/dev/zero of=/d1/testfile bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 41.92 s, 250 MB/s
or 2 Gbps.
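The Gbps figures quoted here follow from bytes × 8 / elapsed seconds; a quick check with the numbers above:

```shell
# bytes * 8 bits / seconds / 1e9 = Gbit/s
awk 'BEGIN { printf "%.1f\n", 10485760000 * 8 / 6.09593 / 1e9 }'  # host:     13.8
awk 'BEGIN { printf "%.1f\n", 10485760000 * 8 / 41.92   / 1e9 }'  # instance:  2.0
```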
Host and instance, other test:
[root@p4-vmhost ~]# dd if=/dev/zero of=/home/testfile bs=1G count=10 oflag=direct
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 6.11967 s, 1.8 GB/s
[root@p4-vr1 ~]# dd if=/dev/zero of=/d1/testfile bs=1G count=10 oflag=direct
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 8.04445 s, 1.3 GB/s
[root@video6 ~]# dd if=/dev/zero of=/d1/testfile bs=1M count=1000000 oflag=direct
^C668768+0 records in
668768+0 records out
701254074368 bytes (701 GB) copied, 2919.03 s, 240 MB/s
On the storage server, running dstat:
root@node-8:~# dstat -all -N br-storage
----total-cpu-usage---- -dsk/total- -net/br-storage- ---paging-- ---system-- ---load-avg--- ---load-avg---
usr sys idl wai hiq siq| read writ| recv send| in out | int csw | 1m 5m 15m | 1m 5m 15m
1 1 98 0 0 0| 176k 7751k| 0 0 | 0 6B|7650 13k|0.92 1.06 1.17|0.92 1.06 1.17
2 5 92 0 0 1| 80k 874M| 208M 999k| 0 0 | 52k 30k|0.92 1.06 1.17|0.92 1.06 1.17
2 4 93 0 0 2| 32k 226M| 232M 1223k| 0 0 | 55k 30k|0.92 1.06 1.17|0.92 1.06 1.17
1 4 94 0 0 1| 16k 20k| 240M 1159k| 0 0 | 51k 28k|0.92 1.06 1.17|0.92 1.06 1.17
2 5 91 0 0 2| 0 16k| 230M 1173k| 0 0 | 54k 29k|0.92 1.06 1.17|0.92 1.06 1.17
1 4 94 0 0 1| 0 0 | 240M 1117k| 0 0 | 52k 29k|0.85 1.04 1.16|0.85 1.04 1.16
1 3 94 0 0 1| 240k 280k| 227M 837k| 0 0 | 45k 29k|0.85 1.04 1.16|0.85 1.04 1.16
2 6 89 0 0 3| 16k 1264M| 224M 867k| 0 0 | 51k 27k|0.85 1.04 1.16|0.85 1.04 1.16
2 3 94 0 0 1| 0 0 | 238M 993k| 0 0 | 46k 23k|0.85 1.04 1.16|0.85 1.04 1.16
1 4 94 0 0 1| 0 0 | 242M 993k| 0 0 | 48k 27k|0.85 1.04 1.16|0.85 1.04 1.16
2 3 94 0 0 1| 0 16k| 241M 1040k| 0 0 | 50k 30k|0.78 1.02 1.16|0.78 1.02 1.16
2 5 92 0 0 1| 40k 76k| 212M 812k| 0 0 | 44k 28k|0.78 1.02 1.16|0.78 1.02 1.16
2 6 91 0 0 2| 24k 1224M| 228M 929k| 0 0 | 52k 28k|0.78 1.02 1.16|0.78 1.02 1.16
1 3 95 0 0 1| 0 0 | 216M 934k| 0 0 | 47k 28k|0.78 1.02 1.16|0.78 1.02 1.16
2 3 93 0 0 1| 0 0 | 242M 1120k| 0 0 | 52k 28k|0.78 1.02 1.16|0.78 1.02 1.16
1 3 95 0 0 1| 0 0 | 237M 1093k| 0 0 | 50k 28k|0.72 1.00 1.15|0.72 1.00 1.15
2 2 94 0 0 1| 40k 80k| 229M 947k| 0 0 | 43k 23k|0.72 1.00 1.15|0.72 1.00 1.15
3 8 88 0 0 1| 24k 362M| 248M 972k| 0 0 | 49k 23k|0.72 1.00 1.15|0.72 1.00 1.15
5 10 84 0 0 2| 0 870M| 250M 1173k| 0 0 | 57k 29k|0.72 1.00 1.15|0.72 1.00 1.15
4 9 86 0 0 1| 0 60k| 241M 1055k| 0 0 | 48k 29k|0.72 1.00 1.15|0.72 1.00 1.15
4 4 90 0 0 1| 0 0 | 218M 974k| 0 0 | 46k 27k|0.74 1.00 1.15|0.74 1.00 1.15
4 5 89 1 0 2|6964k 72k| 243M 1020k| 0 0 | 54k 35k|0.74 1.00 1.15|0.74 1.00 1.15
3 4 92 0 0 2| 24k 196k| 246M 898k| 0 0 | 47k 28k|0.74 1.00 1.15|0.74 1.00 1.15
2 7 89 0 0 2| 0 1264M| 244M 900k| 0 0 | 52k 29k|0.74 1.00 1.15|0.74 1.00 1.15
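Note the pattern in the dstat samples: network receive on br-storage holds steady at roughly 230-240 MB/s, while disk writes land in large bursts (~0.9-1.3 GB) every few seconds, suggesting the controller or page cache absorbs the stream and flushes it periodically. Averaging the write bursts over the 24 one-second samples recovers roughly the receive rate (a quick sanity check using the burst figures above):

```shell
# Major write bursts from the 24 samples above, in MB, averaged over 24 s:
awk 'BEGIN { printf "%.1f MB/s\n", (874+226+1264+1224+362+870+1264) / 24 }'  # ~253.5 MB/s
```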
Running iperf between the two servers shows almost full line rate:
[ 3] 8.0-10.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 10.0-12.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 12.0-14.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 14.0-16.0 sec 2.19 GBytes 9.39 Gbits/sec
[ 3] 16.0-18.0 sec 2.19 GBytes 9.39 Gbits/sec
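The reported rate is consistent with the transfer column: iperf's GBytes are binary (2^30 bytes), and 2.19 GiB per 2-second interval converts as follows (the small discrepancy against 9.39 comes from the transfer figure being rounded to 2.19):

```shell
# 2.19 GiB per 2-second interval, converted to Gbit/s:
awk 'BEGIN { printf "%.2f\n", 2.19 * 1073741824 * 8 / 2 / 1e9 }'  # ~9.41
```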
Changed in fuel:
  milestone: none → 7.0-updates
  tags: added: area-mos cinder
Changed in fuel:
  assignee: nobody → MOS Cinder (mos-cinder)
  importance: Undecided → High
  status: New → Confirmed
  assignee: MOS Cinder (mos-cinder) → MOS Maintenance (mos-maintenance)
no longer affects: fuel/future
Changed in fuel:
  milestone: 9.0 → 10.0
Changed in fuel:
  assignee: Ivan Kolodyazhny (e0ne) → Michael Semenov (msemenov)
I've got similar results in my virtual lab.