Comment 21 for bug 1996010

dongdong tao (taodd) wrote:

I believe I've already verified this against the focal-proposed package.

Anyway, I've spent some time re-verifying the package.
I've performed the testing with both the focal-proposed (15.2.17-0ubuntu0.20.04.5) and cloud-archive:ussuri-proposed (15.2.17-0ubuntu0.20.04.5~cloud0) ceph packages, and the results look good to me.

Testing steps are:

1. Deploy two new ceph clusters, one with the focal-proposed and one with the ussuri-proposed ceph package.
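
For reference, a minimal sketch of enabling the two proposed pockets on the nodes (the actual deployment tooling may differ):

   # focal-proposed pocket
   echo "deb http://archive.ubuntu.com/ubuntu focal-proposed main universe" | \
       sudo tee /etc/apt/sources.list.d/focal-proposed.list
   # or, for the cloud archive, the ussuri-proposed pocket
   sudo add-apt-repository cloud-archive:ussuri-proposed
   sudo apt update && sudo apt install ceph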

2. Create enough rbd images to spread across all the OSDs.
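
Something along these lines works for the image creation (the "bench" pool name matches the fio command below; the image count and pg_num are illustrative):

   ceph osd pool create bench 256
   rbd pool init bench
   for i in $(seq 1 50); do rbd create --size 100G bench/test$i; done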

3. Stress them with a fio 4k randwrite workload in parallel until the OSDs have enough onodes in their caches (more than 60k onodes; at that point bluestore_cache_other exceeds 1 GB), as checked with the commands after the fio line below:

   fio --name=randwrite --rw=randwrite --ioengine=rbd --bs=4k --direct=1 --numjobs=1 --size=100G --iodepth=16 --clientname=admin --pool=bench --rbdname=test
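
The onode and cache figures can be confirmed via the OSD admin socket, e.g. for osd.0 (dump_mempools reports per-pool item and byte counts):

   ceph daemon osd.0 dump_mempools | grep -A 2 bluestore_cache_onode
   ceph daemon osd.0 dump_mempools | grep -A 2 bluestore_cache_other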

4. Shrink the pg_num to a very low number so that there is roughly one pg per osd.
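
For example, with the autoscaler disabled on the "bench" pool (the exact target pg_num depends on the OSD count and replication factor):

   ceph osd pool set bench pg_autoscale_mode off
   ceph osd pool set bench pg_num 4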

5. Once the shrink has finished, enable debug_bluestore=20/20; grepping the OSD logs for max_shard_onodes no longer shows a 0-sized onode cache.
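
Roughly, assuming the default log location:

   ceph config set osd debug_bluestore 20/20
   grep max_shard_onodes /var/log/ceph/ceph-osd.*.log
   # with the fixed package, max_shard_onodes is reported as a non-zero value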