I upgraded my "NVMe" cluster (26 OSDs, 2.5GbE network) today and am pasting the test results from before and after the upgrade. Please note that all MON units were rebooted after the upgrade, and all OSD services were restarted on the OSD units. This cluster has no other active I/O going on. Unlike the Quincy cluster, the performance gain here is not very noticeable:
16.2.11 (Take 1)
----------------
Run status group 0 (all jobs):
READ: bw=197MiB/s (207MB/s), 127KiB/s-27.0MiB/s (130kB/s-28.3MB/s), io=348GiB (373GB), run=1800279-1803879msec
WRITE: bw=137MiB/s (143MB/s), 42.0KiB/s-23.6MiB/s (43.0kB/s-24.7MB/s), io=241GiB (258GB), run=1800279-1803431msec
16.2.11 (Take 2)
----------------
Run status group 0 (all jobs):
READ: bw=207MiB/s (217MB/s), 113KiB/s-27.8MiB/s (116kB/s-29.2MB/s), io=365GiB (392GB), run=1800132-1805327msec
WRITE: bw=117MiB/s (123MB/s), 37.6KiB/s-25.8MiB/s (38.5kB/s-27.1MB/s), io=207GiB (222GB), run=1801581-1805342msec
16.2.13 (Take 1)
----------------
Run status group 0 (all jobs):
READ: bw=199MiB/s (208MB/s), 138KiB/s-27.1MiB/s (141kB/s-28.4MB/s), io=350GiB (376GB), run=1800388-1803945msec
WRITE: bw=121MiB/s (127MB/s), 45.7KiB/s-19.5MiB/s (46.8kB/s-20.5MB/s), io=214GiB (229GB), run=1800507-1803378msec
16.2.13 (Take 2)
----------------
Run status group 0 (all jobs):
READ: bw=173MiB/s (182MB/s), 123KiB/s-27.7MiB/s (126kB/s-29.1MB/s), io=305GiB (327GB), run=1800106-1802642msec
WRITE: bw=123MiB/s (129MB/s), 40.8KiB/s-19.6MiB/s (41.7kB/s-20.6MB/s), io=216GiB (232GB), run=1800290-1802642msec
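For context, the summaries above are fio "Run status group" aggregates from roughly 30-minute (runtime ≈ 1800 s) mixed random read/write runs. The original job file was not posted; the fragment below is only a hypothetical sketch of a job that would produce output of this shape, and every parameter in it (block-size range, read/write mix, job count, target directory) is an assumption, not the author's actual configuration:

```ini
; Hypothetical fio job file (sketch only, not the author's actual job).
[global]
rw=randrw
rwmixread=60        ; assumed mix; actual ratio unknown
bsrange=4k-4m       ; a wide bs range would explain the per-job bw spread
direct=1
time_based=1
runtime=1800        ; matches the ~1800000 ms run times above

[osd-bench]
numjobs=16          ; assumed; per-job min/max bw implies multiple jobs
size=16G
directory=/mnt/cephfs   ; assumed mount point, not from the post
```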
tcmalloc is linked and being used (From an OSD unit):
$ sudo ps aux | grep ceph-osd
ubuntu 13521 0.0 0.0 8164 660 pts/2 S+ 21:49 0:00 grep --color=auto ceph-osd
ceph 1963802 6.1 5.1 1536184 842856 ? Ssl 20:46 3:53 /usr/bin/ceph-osd -f --cluster ceph --id 24 --setuser ceph --setgroup ceph
ceph 1963806 6.4 5.6 1604864 918996 ? Ssl 20:46 4:03 /usr/bin/ceph-osd -f --cluster ceph --id 25 --setuser ceph --setgroup ceph
$ sudo fuser /lib/x86_64-linux-gnu/libtcmalloc.so.4
/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.3: 1963802m 1963806m
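Aside from fuser, another way to confirm which allocator a running daemon has loaded is to read its /proc/<pid>/maps directly. In the sketch below the current shell's own PID ($$) stands in for a ceph-osd PID; on an OSD host you would substitute a PID from the ps output (e.g. 1963802) and grep for tcmalloc instead of libc:

```shell
# List the libc images mapped into a process by scanning /proc/<pid>/maps.
# $$ (this shell) is a stand-in PID; on an OSD node use a ceph-osd PID and
# grep for 'tcmalloc' to verify the allocator, e.g.:
#   grep -o 'libtcmalloc[^ ]*' /proc/1963802/maps | sort -u
grep -o 'libc[^/ ]*\.so[^ ]*' "/proc/$$/maps" | sort -u
```

The "m" suffix in the fuser output above means the same thing: the library is mmap'ed into those PIDs.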