arcstat.py and arc_summary.py are valuable tools for determining ZFS ARC usage, so it is not obvious why they are not included in zfsutils-linux. As ubuntu-minimal already depends on python3, it should be safe to assume Python is available, or am I mistaken here?
arcstat.py gives an iostat-like overview of ARC reads, hit rate, and current and target size at regular intervals:
# ./arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
19:22:33     0     0      0     0    0     0    0     0    0   2.0G  7.8G
19:22:34     3     0      0     0    0     0    0     0    0   2.0G  7.8G
19:22:35    21     0      0     0    0     0    0     0    0   2.0G  7.8G
^C
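On Linux, the tools read the ARC counters from the arcstats kstat file (/proc/spl/kstat/zfs/arcstats) and compute ratios from them. The following is a minimal sketch of that parsing step; the embedded sample text is illustrative, since the kstat file only exists when the ZFS module is loaded:

```python
# Minimal sketch of how an arcstat.py-style tool reads ARC counters.
# /proc/spl/kstat/zfs/arcstats has a two-line header followed by
# "name type data" rows; SAMPLE below is a hypothetical excerpt.
SAMPLE = """\
13 1 0x01 91 4368 6165949424 1240072032357
name                            type data
hits                            4    4510000
misses                          4    27740
size                            4    2147483648
c                               4    8355840000
"""

def parse_arcstats(text):
    """Return a dict mapping kstat counter name -> integer value."""
    stats = {}
    for line in text.splitlines()[2:]:   # skip the two header lines
        name, _type, data = line.split()
        stats[name] = int(data)
    return stats

stats = parse_arcstats(SAMPLE)
reads = stats["hits"] + stats["misses"]
hit_pct = 100.0 * stats["hits"] / reads
print(f"reads={reads} hit%={hit_pct:.2f} arcsz={stats['size']} c={stats['c']}")
```

On a live system the same function could be pointed at the contents of /proc/spl/kstat/zfs/arcstats; arcstat.py additionally diffs successive samples to show per-interval rates rather than totals since boot.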
arc_summary.py shows a more detailed overview of the current ARC status and ZFS tunables:
# ./arc_summary.py
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- --
ZFS Subsystem Report Sun Apr 24 19:23:25 2016
ARC Summary: (HEALTHY)
    Memory Throttle Count:          0

ARC Misc:
    Deleted:                        46
    Mutex Misses:                   0
    Evict Skips:                    0

ARC Size:                           25.10%  1.95 GiB
    Target Size: (Adaptive)         100.00% 7.78 GiB
    Min Size (Hard Limit):          0.40%   32.00 MiB
    Max Size (High Water):          248:1   7.78 GiB

ARC Size Breakdown:
    Recently Used Cache Size:       50.00%  3.89 GiB
    Frequently Used Cache Size:     50.00%  3.89 GiB

ARC Hash Breakdown:
    Elements Max:                   32.31k
    Elements Current:               99.78%  32.24k
    Collisions:                     40.54k
    Chain Max:                      3
    Chains:                         240

ARC Total accesses:                 4.54m
    Cache Hit Ratio:                99.39%  4.51m
    Cache Miss Ratio:               0.61%   27.74k
    Actual Hit Ratio:               98.76%  4.48m

    Data Demand Efficiency:         99.73%  3.23m
    Data Prefetch Efficiency:       11.32%  6.41k

    CACHE HITS BY CACHE LIST:
      Anonymously Used:             0.64%   28.65k
      Most Recently Used:           21.90%  987.29k
      Most Frequently Used:         77.47%  3.49m
      Most Recently Used Ghost:     0.00%   0
      Most Frequently Used Ghost:   0.00%   0

    CACHE HITS BY DATA TYPE:
      Demand Data:                  71.40%  3.22m
      Prefetch Data:                0.02%   725
      Demand Metadata:              27.97%  1.26m
      Prefetch Metadata:            0.62%   27.92k

    CACHE MISSES BY DATA TYPE:
      Demand Data:                  31.81%  8.82k
      Prefetch Data:                20.48%  5.68k
      Demand Metadata:              21.99%  6.10k
      Prefetch Metadata:            25.72%  7.13k

File-Level Prefetch: (HEALTHY)
DMU Efficiency:                     36.39m
    Hit Ratio:                      93.36%  33.97m
    Miss Ratio:                     6.64%   2.42m

    Colinear:                       2.42m
      Hit Ratio:                    0.02%   505
      Miss Ratio:                   99.98%  2.42m

    Stride:                         33.94m
      Hit Ratio:                    100.00% 33.94m
      Miss Ratio:                   0.00%   14

DMU Misc:
    Reclaim:                        2.42m
      Successes:                    2.46%   59.51k
      Failures:                     97.54%  2.36m

    Streams:                        35.39k
      +Resets:                      0.05%   18
      -Resets:                      99.95%  35.37k
      Bogus:                        0
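The ratios in the report follow directly from the raw counters: the Cache Hit Ratio is hits / (hits + misses). A quick sanity check using the rounded counts printed above (4.51m hits, 27.74k misses):

```python
# Reproduce the headline ratios from the arc_summary.py report above,
# using the human-rounded counts it printed.
hits = 4.51e6      # "Cache Hit Ratio ... 4.51m"
misses = 27.74e3   # "Cache Miss Ratio ... 27.74k"
total = hits + misses

hit_ratio = 100 * hits / total
miss_ratio = 100 * misses / total

print(f"total accesses ~ {total / 1e6:.2f}m")    # ~4.54m, as reported
print(f"cache hit ratio ~ {hit_ratio:.2f}%")     # ~99.39%
print(f"cache miss ratio ~ {miss_ratio:.2f}%")   # ~0.61%
```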
ZFS Tunable:
    metaslab_debug_load                           0
    zfs_arc_min_prefetch_lifespan                 0
    zfetch_max_streams                            8
    zfs_nopwrite_enabled                          1
    zfetch_min_sec_reap                           2
    zfs_dbgmsg_enable                             0
    zfs_dirty_data_max_max_percent                25
    zfs_arc_p_aggressive_disable                  1
    spa_load_verify_data                          1
    zfs_zevent_cols                               80
    zfs_dirty_data_max_percent                    10
    zfs_sync_pass_dont_compress                   5
    l2arc_write_max                               8388608
    zfs_vdev_scrub_max_active                     2
    zfs_vdev_sync_write_min_active                10
    zvol_prefetch_bytes                           131072
    metaslab_aliquot                              524288
    zfs_no_scrub_prefetch                         0
    zfs_arc_shrink_shift                          0
    zfetch_block_cap                              256
    zfs_txg_history                               0
    zfs_delay_scale                               500000
    zfs_vdev_async_write_active_min_dirty_percent 30
    metaslab_debug_unload                         0
    zfs_read_history                              0
    zvol_max_discard_blocks                       16384
    zfs_recover                                   0
    l2arc_headroom                                2
    zfs_deadman_synctime_ms                       1000000
    zfs_scan_idle                                 50
    zfs_free_min_time_ms                          1000
    zfs_dirty_data_max                            1670300876
    zfs_vdev_async_read_min_active                1
    zfs_mg_noalloc_threshold                      0
    zfs_dedup_prefetch                            0
    zfs_vdev_max_active                           1000
    l2arc_write_boost                             8388608
    zfs_resilver_min_time_ms                      3000
    zfs_vdev_async_write_max_active               10
    zil_slog_limit                                1048576
    zfs_prefetch_disable                          0
    zfs_resilver_delay                            2
    metaslab_lba_weighting_enabled                1
    zfs_mg_fragmentation_threshold                85
    l2arc_feed_again                              1
    zfs_zevent_console                            0
    zfs_immediate_write_sz                        32768
    zfs_dbgmsg_maxsize                            4194304
    zfs_free_leak_on_eio                          0
    zfs_deadman_enabled                           1
    metaslab_bias_enabled                         1
    zfs_arc_p_dampener_disable                    1
    zfs_object_mutex_size                         64
    zfs_metaslab_fragmentation_threshold          70
    zfs_no_scrub_io                               0
    metaslabs_per_vdev                            200
    zfs_dbuf_state_index                          0
    zfs_vdev_sync_read_min_active                 10
    metaslab_fragmentation_factor_enabled         1
    zvol_inhibit_dev                              0
    zfs_vdev_async_write_active_max_dirty_percent 60
    zfs_vdev_cache_size                           0
    zfs_vdev_mirror_switch_us                     10000
    zfs_dirty_data_sync                           67108864
    spa_config_path                               /etc/zfs/zpool.cache
    zfs_dirty_data_max_max                        4175752192
    zfs_arc_lotsfree_percent                      10
    zfs_zevent_len_max                            128
    zfs_scan_min_time_ms                          1000
    zfs_arc_sys_free                              0
    zfs_arc_meta_strategy                         1
    zfs_vdev_cache_bshift                         16
    zfs_arc_meta_adjust_restarts                  4096
    zfs_max_recordsize                            1048576
    zfs_vdev_scrub_min_active                     1
    zfs_vdev_read_gap_limit                       32768
    zfs_arc_meta_limit                            0
    zfs_vdev_sync_write_max_active                10
    l2arc_norw                                    0
    zfs_arc_meta_prune                            10000
    metaslab_preload_enabled                      1
    l2arc_nocompress                              0
    zvol_major                                    230
    zfs_vdev_aggregation_limit                    131072
    zfs_flags                                     0
    spa_asize_inflation                           24
    zfs_admin_snapshot                            0
    l2arc_feed_secs                               1
    zio_taskq_batch_pct                           75
    zfs_sync_pass_deferred_free                   2
    zfs_disable_dup_eviction                      0
    zfs_arc_grow_retry                            0
    zfs_read_history_hits                         0
    zfs_vdev_async_write_min_active               1
    zfs_vdev_async_read_max_active                3
    zfs_scrub_delay                               4
    zfs_delay_min_dirty_percent                   60
    zfs_free_max_blocks                           100000
    zfs_vdev_cache_max                            16384
    zio_delay_max                                 30000
    zfs_top_maxinflight                           32
    spa_slop_shift                                5
    zfs_vdev_write_gap_limit                      4096
    spa_load_verify_metadata                      1
    spa_load_verify_maxinflight                   10000
    l2arc_noprefetch                              1
    zfs_vdev_scheduler                            noop
    zfs_expire_snapshot                           300
    zfs_sync_pass_rewrite                         2
    zil_replay_disable                            0
    zfs_nocacheflush                              0
    zfs_arc_max                                   0
    zfs_arc_min                                   0
    zfs_read_chunk_size                           1048576
    zfs_txg_timeout                               5
    zfs_pd_bytes_max                              52428800
    l2arc_headroom_boost                          200
    zfs_send_corrupt_data                         0
    l2arc_feed_min_ms                             200
    zfs_arc_meta_min                              0
    zfs_arc_average_blocksize                     8192
    zfetch_array_rd_sz                            1048576
    zfs_autoimport_disable                        1
    zfs_arc_p_min_shift                           0
    zio_requeue_io_start_cut_in_line              1
    zfs_vdev_sync_read_max_active                 10
    zfs_mdcomp_disable                            0
    zfs_arc_num_sublists_per_state                4
Both tools are well documented and distributed upstream; see:
https://github.com/zfsonlinux/zfs/blob/master/cmd/arcstat/arcstat.py
https://github.com/zfsonlinux/zfs/blob/master/cmd/arc_summary/arc_summary.py