ceph should build linked with tcmalloc

Bug #2016845 reported by Chris MacNaughton
Affects                 Status         Importance   Assigned to   Milestone
Ubuntu Cloud Archive    Invalid        Undecided    Unassigned
  Wallaby               Fix Released   Critical     Unassigned
  Xena                  Fix Released   Critical     Unassigned
ceph (Ubuntu)           Fix Released   Undecided    Unassigned

Bug Description

Sometime between 15.2.17 on Focal and 17.2.5 (on either Focal via the Yoga UCA or on Jammy), the ceph-osd process stopped linking to tcmalloc. tcmalloc should remain enabled for Ceph performance.

Revision history for this message
James Page (james-page) wrote :

Did a quick check on the build log - gperftools is detected:

-- Found gperftools: /usr/include (found suitable version "2.9.1", minimum required is "2.6.2")

but I agree that the allocator is then not actually linked into the resulting binary packages.
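
For reference, a minimal sketch of the two checks described above, assuming a locally saved copy of the build log (the filename here is hypothetical) and the installed ceph-osd binary:

```
# The configure stage reports that gperftools was found...
grep -i "Found gperftools" ceph-build.log

# ...but the shipped binary has no tcmalloc among its dynamic dependencies.
objdump -p /usr/bin/ceph-osd | grep NEEDED | grep -i tcmalloc \
    || echo "tcmalloc is not linked"
```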

Revision history for this message
Ponnuvel Palaniyappan (pponnuvel) wrote :

Looks like this is an issue in Pacific too.

```
root@juju-8e65ca-pacific-2:/home/ubuntu# dpkg -l | grep ceph
ii ceph 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 distributed storage and file system
ii ceph-base 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 common ceph daemon libraries and management tools
ii ceph-common 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 16.2.11-0ubuntu0.21.10.1~cloud0 all ceph manager modules which are always enabled
ii ceph-mon 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 monitor server for the ceph storage system
ii ceph-osd 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 OSD server for the ceph storage system
ii libcephfs2 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 Ceph distributed file system client library
ii libsqlite3-mod-ceph 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 SQLite3 VFS for Ceph
ii python3-ceph-argparse 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 16.2.11-0ubuntu0.21.10.1~cloud0 all Python 3 utility libraries for Ceph
ii python3-cephfs 16.2.11-0ubuntu0.21.10.1~cloud0 amd64 Python 3 libraries for the Ceph libcephfs library
root@juju-8e65ca-pacific-2:/home/ubuntu# ceph daemon osd.1 version
{
    "version": "16.2.11",
    "release": "pacific",
    "release_type": "stable"
}
root@juju-8e65ca-pacific-2:/home/ubuntu# ldd $(which ceph-osd) | grep tcmalloc
root@juju-8e65ca-pacific-2:/home/ubuntu#

```

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in ceph (Ubuntu):
status: New → Confirmed
Revision history for this message
Dan Hill (hillpd) wrote :

The Ceph CMake files are a bit of a tangled mess. The tcmalloc library was originally linked through an `${ALLOC_LIBS}` definition that was directly added to individual daemon / utility targets. Nautilus reworked how these libraries were linked by refactoring everything into bundles of common libraries [0].

As a result, ceph-osd links the tcmalloc library through several layers of CMake library dependencies:
> `ceph-osd` adds the `os` set of libraries
-> `os` adds the `heap_profiler` set of libraries
--> `heap_profiler` adds the `gperftools::tcmalloc` library

The core issue is that the `heap_profiler` definition removes tcmalloc when `WITH_SEASTAR` is defined [1]. Ubuntu packaging has been built `WITH_SEASTAR=ON` since 16.2.4-0ubuntu1 [2].

More recently, upstream also broke tcmalloc while enabling seastar. They have proposed a solution that switches back to linking individual targets with `${ALLOC_LIBS}` [3]. This fix has been merged into master, with backports to quincy and pacific in progress [4].

[0] https://github.com/ceph/ceph/pull/22990/commits/2e012870315e5a08e90c27f14666e8518b94caef
[1] https://github.com/ceph/ceph/commit/028159ef6ea0f43c4eaf2314d7d73f92bbc5bd99
[2] https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1928645
[3] https://github.com/ceph/ceph/pull/46103/commits/1e3d2b53cfeeff4bac2f384c6445b65eb9e8396f
[4] https://tracker.ceph.com/issues/55519
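
To sanity-check the diagnosis above against the Ubuntu source package, something along these lines should work (a rough sketch; it assumes deb-src entries are enabled and that the relevant CMake conditional sits under src/, per the commits linked in [1] and [3]):

```
# Fetch the packaged source.
apt-get source ceph

# Ubuntu packaging enables crimson/seastar (expect a -DWITH_SEASTAR=ON flag), per [2]:
grep -rn "WITH_SEASTAR" ceph-*/debian/rules

# The conditional that ends up dropping gperftools::tcmalloc when seastar is enabled, per [1]:
grep -rn --include=CMakeLists.txt "WITH_SEASTAR" ceph-*/src/

# After rebuilding with the upstream ALLOC_LIBS fix [3] applied and installing
# the result, the OSD binary should link tcmalloc again:
ldd $(which ceph-osd) | grep tcmalloc
```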

Revision history for this message
Ponnuvel Palaniyappan (pponnuvel) wrote :

Fix for [0] is already in Quincy 17.2.6.

A test PPA [1] which includes just that patch links tcmalloc again. SRU'ing it or releasing 17.2.6 for Ubuntu is likely enough to solve this.

[0] https://tracker.ceph.com/issues/55519
[1] https://launchpad.net/~gerald-yang-tw/+archive/ubuntu/359159
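
One possible way to verify with that PPA (an untested sketch; the ppa: shortcut is derived from the URL in [1]):

```
sudo add-apt-repository ppa:gerald-yang-tw/359159
sudo apt-get update && sudo apt-get install --only-upgrade ceph-osd
ldd $(which ceph-osd) | grep tcmalloc    # should now list libtcmalloc.so.4
```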

Revision history for this message
Alan Baghumian (alanbach) wrote :

Any plans to release updated packages to fix tcmalloc linking issues for Pacific?

Revision history for this message
Alan Baghumian (alanbach) wrote :

I went ahead and ran an FIO test with the current 17.2.5 packages against my home lab Ceph cluster. I'm pasting the results here so we can compare with 17.2.6 later on.

Some notes:

1. This cluster is only being used by a single OpenStack instance and did not have any I/O activity during the tests.
2. The cluster has 6 nodes, 6 OSD(s), each 300GB for a total raw size of 1.8TB.
3. The network is a 1Gbps network shared with the OpenStack cluster.
4. The memory target for OSD(s) is set to 1GB.
5. Each node has 4GB of memory and 2 CPU cores.
6. The tests were executed from inside an OpenStack instance on a Ceph backed volume.

Please see the attachments for the FIO test profile, results and cluster status.

Direct FIO tests against the pool will follow.

Revision history for this message
Ponnuvel Palaniyappan (pponnuvel) wrote :

Quincy 17.2.6 has the fix in jammy-proposed (https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2018929).

The 17.2.6 ceph binary packages do link with tcmalloc correctly.

Revision history for this message
Ponnuvel Palaniyappan (pponnuvel) wrote :

@Alan,

I've proposed backport to Pacific upstream: https://github.com/ceph/ceph/pull/51282

However, it hasn't got much attention. There's a possibility that there's one more (and final) Pacific point release from upstream before it goes EOL. If so, I'll try to get that patch into that. Otherwise we'll have to do an SRU on top of 16.2.13.

If you are simply looking to benchmark, I guess you can use 17.2.6 from jammy-proposed and compare the numbers against the data you have from 17.2.5.

Revision history for this message
Alan Baghumian (alanbach) wrote :

@Pon Is this something we can patch internally if there is no upstream attention?

My main Ceph cluster is running on Pacific - using 16.2.11 packages currently. Just for the sake of this testing, I ran FIO tests similar to the Quincy cluster on this one as well.

This is again from inside OpenStack, on a 500GB Ceph backed volume on the NVME cluster.

This one is smaller in size (1.1TB), but it uses NVMe-backed virtual disks. I have 26 OSD(s), with 2GB target memory on each of them.

Attaching results and cluster details for reference.

Revision history for this message
Alan Baghumian (alanbach) wrote :

I discovered this morning that my SSD cluster (6 OSDs) had inconsistencies with the memory target. So I reset the memory target on all OSD(s) to 1GB and ran the tests.

Then later this afternoon, I upgraded the packages on all MON/OSD units using jammy-proposed, rebooted them and ran the FIO tests again. Attaching the results.
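
For reference, the memory-target reset described above can be done roughly like this (a sketch, applied cluster-wide; the value is in bytes):

```
# Set a 1 GiB memory target for all OSDs and confirm it took effect.
sudo ceph config set osd osd_memory_target 1073741824
sudo ceph config get osd osd_memory_target
```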

Revision history for this message
Alan Baghumian (alanbach) wrote :

Please refer to the FIO profile for job mixture details - test duration was 30 minutes.

17.2.5
------
READ: bw=33.5MiB/s (35.1MB/s), 15.4KiB/s-5179KiB/s (15.8kB/s-5303kB/s), io=59.6GiB (64.0GB), run=1803434-1820709msec
WRITE: bw=26.6MiB/s (27.9MB/s), 5071B/s-7489KiB/s (5071B/s-7669kB/s), io=47.3GiB (50.8GB), run=1811019-1820307msec

17.2.6
------
READ: bw=39.7MiB/s (41.6MB/s), 20.3KiB/s-5745KiB/s (20.8kB/s-5883kB/s), io=70.7GiB (75.9GB), run=1801136-1824950msec
WRITE: bw=25.3MiB/s (26.5MB/s), 6818B/s-5598KiB/s (6818B/s-5732kB/s), io=45.1GiB (48.4GB), run=1802114-1824895msec

My "NVME" cluster is significantly faster than this one, that's why I really like to see updated Pacific packages to test on that cluster as well.

Revision history for this message
Alan Baghumian (alanbach) wrote :

Upgrade:

echo "deb http://archive.ubuntu.com/ubuntu/ jammy-proposed main" | sudo tee /etc/apt/sources.list.d/jammy-proposed.list && sudo apt-get update && sudo apt-get install -y ceph=17.2.6-0ubuntu0.22.04.1 ceph-base=17.2.6-0ubuntu0.22.04.1 ceph-common=17.2.6-0ubuntu0.22.04.1 ceph-mds=17.2.6-0ubuntu0.22.04.1 ceph-mgr=17.2.6-0ubuntu0.22.04.1 ceph-mgr-modules-core=17.2.6-0ubuntu0.22.04.1 ceph-mon=17.2.6-0ubuntu0.22.04.1 ceph-osd=17.2.6-0ubuntu0.22.04.1 ceph-volume=17.2.6-0ubuntu0.22.04.1 libcephfs2=17.2.6-0ubuntu0.22.04.1 librados2=17.2.6-0ubuntu0.22.04.1 libradosstriper1=17.2.6-0ubuntu0.22.04.1 librbd1=17.2.6-0ubuntu0.22.04.1 librgw2=17.2.6-0ubuntu0.22.04.1 libsqlite3-mod-ceph=17.2.6-0ubuntu0.22.04.1 python3-ceph-argparse=17.2.6-0ubuntu0.22.04.1 python3-ceph-common=17.2.6-0ubuntu0.22.04.1 python3-cephfs=17.2.6-0ubuntu0.22.04.1 python3-rados=17.2.6-0ubuntu0.22.04.1 python3-rbd=17.2.6-0ubuntu0.22.04.1 radosgw=17.2.6-0ubuntu0.22.04.1

Downgrade:

sudo apt-get update && sudo apt-get install -y --allow-downgrades ceph=17.2.5-0ubuntu0.22.04.3 ceph-base=17.2.5-0ubuntu0.22.04.3 ceph-common=17.2.5-0ubuntu0.22.04.3 ceph-mds=17.2.5-0ubuntu0.22.04.3 ceph-mgr=17.2.5-0ubuntu0.22.04.3 ceph-mgr-modules-core=17.2.5-0ubuntu0.22.04.3 ceph-mon=17.2.5-0ubuntu0.22.04.3 ceph-osd=17.2.5-0ubuntu0.22.04.3 ceph-volume=17.2.5-0ubuntu0.22.04.3 libcephfs2=17.2.5-0ubuntu0.22.04.3 librados2=17.2.5-0ubuntu0.22.04.3 libradosstriper1=17.2.5-0ubuntu0.22.04.3 librbd1=17.2.5-0ubuntu0.22.04.3 librgw2=17.2.5-0ubuntu0.22.04.3 libsqlite3-mod-ceph=17.2.5-0ubuntu0.22.04.3 python3-ceph-argparse=17.2.5-0ubuntu0.22.04.3 python3-ceph-common=17.2.5-0ubuntu0.22.04.3 python3-cephfs=17.2.5-0ubuntu0.22.04.3 python3-rados=17.2.5-0ubuntu0.22.04.3 python3-rbd=17.2.5-0ubuntu0.22.04.3 radosgw=17.2.5-0ubuntu0.22.04.3

Just to make it quick :-D

Revision history for this message
Alan Baghumian (alanbach) wrote :

Do not downgrade the MON(s). They will crash and not come back online. You'll end up keeping them at the 17.2.6 level.

Revision history for this message
Alan Baghumian (alanbach) wrote :

I made some changes to the cluster: I added 2 OSD(s) and set the memory target on all OSD(s) to 2GB. All 3 MON units stayed at the 17.2.6 level while these tests were executed.

17.2.5 (Take 1)
---------------
READ: bw=32.4MiB/s (34.0MB/s), 17.9KiB/s-6638KiB/s (18.3kB/s-6798kB/s), io=28.7GiB (30.8GB), run=901248-907080msec
WRITE: bw=22.6MiB/s (23.7MB/s), 5986B/s-7797KiB/s (5986B/s-7984kB/s), io=20.0GiB (21.5GB), run=900741-907002msec

17.2.5 (Take 2)
---------------
READ: bw=34.2MiB/s (35.9MB/s), 24.7KiB/s-8038KiB/s (25.3kB/s-8231kB/s), io=30.5GiB (32.7GB), run=900998-912168msec
WRITE: bw=21.0MiB/s (22.0MB/s), 8020B/s-5320KiB/s (8020B/s-5448kB/s), io=18.7GiB (20.1GB), run=907270-912159msec

17.2.6 (Take 1)
---------------
READ: bw=39.3MiB/s (41.2MB/s), 22.7KiB/s-8074KiB/s (23.2kB/s-8268kB/s), io=34.8GiB (37.4GB), run=903690-908029msec
WRITE: bw=20.8MiB/s (21.8MB/s), 7381B/s-5413KiB/s (7381B/s-5543kB/s), io=18.4GiB (19.8GB), run=905688-907997msec

17.2.6 (Take 2)
---------------
READ: bw=39.3MiB/s (41.2MB/s), 22.7KiB/s-8074KiB/s (23.2kB/s-8268kB/s), io=34.8GiB (37.4GB), run=903690-908029msec
WRITE: bw=20.8MiB/s (21.8MB/s), 7381B/s-5413KiB/s (7381B/s-5543kB/s), io=18.4GiB (19.8GB), run=905688-907997msec

Revision history for this message
Alan Baghumian (alanbach) wrote :

Additional notes:

- All tests have been executed on a 500GB Ceph backed volume that was about 70% full.
- VM Details (2 vCPU - 2GB RAM):

$ lsb_release -ia
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy

$ uname -a
Linux juju-c57c0c-landscape-nfs-0 5.15.0-75-generic #82-Ubuntu SMP Tue Jun 6 23:10:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

James Page (james-page)
Changed in cloud-archive:
status: New → Invalid
Revision history for this message
James Page (james-page) wrote : Please test proposed package

Hello Chris, or anyone else affected,

Accepted ceph into wallaby-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:wallaby-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-wallaby-needed to verification-wallaby-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-wallaby-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-wallaby-needed
Revision history for this message
James Page (james-page) wrote :

Hello Chris, or anyone else affected,

Accepted ceph into xena-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:xena-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-xena-needed to verification-xena-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-xena-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-xena-needed
Revision history for this message
James Page (james-page) wrote :

focal/xena-proposed:

2023-07-23 13:49:54.070843 | focal-medium | ======
2023-07-23 13:49:54.070860 | focal-medium | Totals
2023-07-23 13:49:54.070873 | focal-medium | ======
2023-07-23 13:49:54.070885 | focal-medium | Ran: 179 tests in 581.5165 sec.
2023-07-23 13:49:54.070919 | focal-medium | - Passed: 167
2023-07-23 13:49:54.070932 | focal-medium | - Skipped: 12
2023-07-23 13:49:54.070945 | focal-medium | - Expected Fail: 0
2023-07-23 13:49:54.070958 | focal-medium | - Unexpected Success: 0
2023-07-23 13:49:54.070970 | focal-medium | - Failed: 0
2023-07-23 13:49:54.070983 | focal-medium | Sum of execute time for each test: 1199.5669 sec.

Revision history for this message
James Page (james-page) wrote :

focal/wallaby-proposed:

2023-07-23 12:19:32.377012 | focal-medium | ======
2023-07-23 12:19:32.377020 | focal-medium | Totals
2023-07-23 12:19:32.377026 | focal-medium | ======
2023-07-23 12:19:32.377032 | focal-medium | Ran: 179 tests in 566.6631 sec.
2023-07-23 12:19:32.377039 | focal-medium | - Passed: 167
2023-07-23 12:19:32.377045 | focal-medium | - Skipped: 12
2023-07-23 12:19:32.377051 | focal-medium | - Expected Fail: 0
2023-07-23 12:19:32.377057 | focal-medium | - Unexpected Success: 0
2023-07-23 12:19:32.377063 | focal-medium | - Failed: 0
2023-07-23 12:19:32.377069 | focal-medium | Sum of execute time for each test: 1313.0410 sec.

tags: added: verification-wallaby-done verification-xena-done
removed: verification-wallaby-needed verification-xena-needed
Changed in ceph (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
James Page (james-page) wrote :

Marking the Ubuntu task as Fix Released as this was included in 17.2.6.

Revision history for this message
James Page (james-page) wrote : Update Released

The verification of the Stable Release Update for ceph has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package ceph - 16.2.13-0ubuntu0.21.04.1~cloud1
---------------

 ceph (16.2.13-0ubuntu0.21.04.1~cloud1) focal-wallaby; urgency=medium
 .
    * Ensure Ceph is linked with and uses tcmalloc, resolving significant
      performance and memory footprint regressions (LP: #2016845):
     - d/p/bug2016845-*.patch: Cherry pick fixes to build system to re-
       enable use of tcmalloc.

Revision history for this message
James Page (james-page) wrote :

The verification of the Stable Release Update for ceph has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package ceph - 16.2.13-0ubuntu0.21.10.1~cloud1
---------------

 ceph (16.2.13-0ubuntu0.21.10.1~cloud1) focal-xena; urgency=medium
 .
    * Ensure Ceph is linked with and uses tcmalloc, resolving significant
      performance and memory footprint regressions (LP: #2016845):
     - d/p/bug2016845-*.patch: Cherry pick fixes to build system to re-
       enable use of tcmalloc.

Revision history for this message
Alan Baghumian (alanbach) wrote :

I upgraded my "NVME" cluster (26 OSDs 2.5GBE network) today and pasting the test results before and after the upgrade. Please note that all MON units were rebooted after the upgrade and all OSD services were also restarted on the OSD units. This cluster does not have any active I/O going on. Unlike the Quincy cluster, the performance gain is not much observable here:

16.2.11 (Take 1)
----------------
Run status group 0 (all jobs):
   READ: bw=197MiB/s (207MB/s), 127KiB/s-27.0MiB/s (130kB/s-28.3MB/s), io=348GiB (373GB), run=1800279-1803879msec
  WRITE: bw=137MiB/s (143MB/s), 42.0KiB/s-23.6MiB/s (43.0kB/s-24.7MB/s), io=241GiB (258GB), run=1800279-1803431msec

16.2.11 (Take 2)
----------------
Run status group 0 (all jobs):
   READ: bw=207MiB/s (217MB/s), 113KiB/s-27.8MiB/s (116kB/s-29.2MB/s), io=365GiB (392GB), run=1800132-1805327msec
  WRITE: bw=117MiB/s (123MB/s), 37.6KiB/s-25.8MiB/s (38.5kB/s-27.1MB/s), io=207GiB (222GB), run=1801581-1805342msec

16.2.13 (Take 1)
----------------
Run status group 0 (all jobs):
   READ: bw=199MiB/s (208MB/s), 138KiB/s-27.1MiB/s (141kB/s-28.4MB/s), io=350GiB (376GB), run=1800388-1803945msec
  WRITE: bw=121MiB/s (127MB/s), 45.7KiB/s-19.5MiB/s (46.8kB/s-20.5MB/s), io=214GiB (229GB), run=1800507-1803378msec

16.2.13 (Take 2)
----------------
Run status group 0 (all jobs):
   READ: bw=173MiB/s (182MB/s), 123KiB/s-27.7MiB/s (126kB/s-29.1MB/s), io=305GiB (327GB), run=1800106-1802642msec
  WRITE: bw=123MiB/s (129MB/s), 40.8KiB/s-19.6MiB/s (41.7kB/s-20.6MB/s), io=216GiB (232GB), run=1800290-1802642msec

tcmalloc is linked and being used (from an OSD unit):

$ sudo ps aux | grep ceph-osd
ubuntu 13521 0.0 0.0 8164 660 pts/2 S+ 21:49 0:00 grep --color=auto ceph-osd
ceph 1963802 6.1 5.1 1536184 842856 ? Ssl 20:46 3:53 /usr/bin/ceph-osd -f --cluster ceph --id 24 --setuser ceph --setgroup ceph
ceph 1963806 6.4 5.6 1604864 918996 ? Ssl 20:46 4:03 /usr/bin/ceph-osd -f --cluster ceph --id 25 --setuser ceph --setgroup ceph

$ sudo fuser /lib/x86_64-linux-gnu/libtcmalloc.so.4
/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.3: 1963802m 1963806m

Revision history for this message
gu (cshaven) wrote :

I had a similar problem with version 17.2.5. When I set osd_memory_target to 2G, I confirmed that it had taken effect, but the memory usage of the ceph-osd process, as seen in top, is more than 2GB.
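
A quick way to compare the configured target against actual process usage (the OSD id below is just an example); note that osd_memory_target is a best-effort target rather than a hard limit, and without tcmalloc the OSD's cache auto-tuning has little to work with:

```
# Effective target for one OSD (run on the OSD host), in bytes.
sudo ceph daemon osd.2 config get osd_memory_target
# Resident set size (KiB) of the running ceph-osd processes.
ps -o pid,rss,cmd -C ceph-osd
```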

Revision history for this message
gu (cshaven) wrote :

root@whmgt02:~# ceph tell osd.2 heap start_profiler
Error ENOTSUP: could not issue heap profiler command -- not using tcmalloc!

Revision history for this message
Alan Baghumian (alanbach) wrote :

I upgraded my "NVME" cluster over the weekend to Quincy (17.2.6) and thought to post some tests results to compare against the last 16.2.11 and 16.2.13 tests:

17.2.6 (Take 1)
---------------
READ: bw=198MiB/s (207MB/s), 125KiB/s-28.6MiB/s (128kB/s-30.0MB/s), io=349GiB (375GB), run=1800118-1807493msec
WRITE: bw=144MiB/s (151MB/s), 41.4KiB/s-22.7MiB/s (42.4kB/s-23.8MB/s), io=254GiB (273GB), run=1800932-1807005msec

17.2.6 (Take 2)
---------------
READ: bw=191MiB/s (200MB/s), 98.1KiB/s-25.3MiB/s (100kB/s-26.5MB/s), io=336GiB (361GB), run=1800055-1802717msec
WRITE: bw=107MiB/s (112MB/s), 32.6KiB/s-24.0MiB/s (33.3kB/s-25.2MB/s), io=188GiB (202GB), run=1800808-1802946msec
