Comment 19 for bug 1996010

dongdong tao (taodd) wrote (last edit ):

I've tested the proposed package; it fixes the reported bluestore_cache_other mempool leak problem.

Testing steps:

1. Deploy a new ceph cluster with the proposed package.
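   A quick way to confirm the OSDs are actually running the proposed build (assuming an Ubuntu/Debian install; the package name is only an example):

   ceph versions
   apt-cache policy ceph-osd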

2. Create enough rbd images to spread the load across all of the OSDs, for example:
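   A minimal sketch, assuming a hypothetical pool named "bench" (matching the fio command below) and placeholder image names, counts and sizes:

   ceph osd pool create bench 128
   rbd pool init bench
   for i in $(seq 1 10); do rbd create bench/test$i --size 100G; done   # run one fio job per image, adjusting --rbdname accordingly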

3. Stress them with a fio 4k randwrite workload in parallel until the OSDs accumulate enough onodes in their caches (more than 60k onodes, at which point you'll see bluestore_cache_other grow over 1 GB):

   fio --name=randwrite --rw=randwrite --ioengine=rbd --bs=4k --direct=1 --numjobs=1 --size=100G --iodepth=16 --clientname=admin --pool=bench --rbdname=test
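   To confirm the cache is populated, the onode count and mempool usage can be checked through each OSD's admin socket (osd.0 is just an example):

   ceph daemon osd.0 perf dump | grep onodes
   ceph daemon osd.0 dump_mempools | grep -A 2 bluestore_cache_other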

4. Shrink the pg_num to a very low number so that there is roughly 1 PG per OSD, and wait for the shrink to finish, for example:
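   A sketch of the shrink, again assuming the hypothetical "bench" pool (the autoscaler must be off for a manual decrease to stick; pick a pg_num that leaves about 1 PG per OSD in your cluster):

   ceph osd pool set bench pg_autoscale_mode off
   ceph osd pool set bench pg_num 4
   ceph -s   # wait until the PG merges finish and the cluster is healthy again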

5. Enable debug_bluestore=20/20; grepping the OSD logs for max_shard_onodes no longer shows a 0-sized onode cache.
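   The debug level can be raised at runtime and the logs checked roughly like this (osd.0 and the default log path are assumptions):

   ceph config set osd debug_bluestore 20/20
   grep max_shard_onodes /var/log/ceph/ceph-osd.0.log | tail

   With the fix applied, the max_shard_onodes values reported in the log stay non-zero even with only ~1 PG per OSD.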