Comment 0 for bug 1574342

Revision history for this message
Hajo Möller (dasjoe) wrote :

arcstat.py and arc_summary.py are valuable tools for determining ZFS's ARC usage, so it is not obvious why they are not included in zfsutils-linux. As ubuntu-minimal already depends on python3, it should be safe to assume Python is available, or am I mistaken here?

arcstat.py gives an iostat-like overview about ARC reads, hit rate, current and target size in regular intervals:
# ./arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
19:22:33     0     0      0     0    0     0    0     0    0   2.0G  7.8G
19:22:34     3     0      0     0    0     0    0     0    0   2.0G  7.8G
19:22:35    21     0      0     0    0     0    0     0    0   2.0G  7.8G
^C
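
Both tools get their numbers from the arcstats kstat that the zfs module exposes at /proc/spl/kstat/zfs/arcstats. A minimal sketch of that parsing (the sample kstat text is inlined here with illustrative values, since the file only exists while the module is loaded):

```python
# Parse the name/type/data triples of a ZFS kstat, the way arcstat.py does.
# SAMPLE stands in for /proc/spl/kstat/zfs/arcstats; values are illustrative.
SAMPLE = """\
13 1 0x01 91 4368 3857190849 458396428932393
name                            type data
hits                            4    4510000
misses                          4    27740
size                            4    2094000000
c                               4    8355000000
"""

def parse_kstat(text):
    stats = {}
    for line in text.splitlines()[2:]:  # skip kstat header and column row
        name, _type, data = line.split()
        stats[name] = int(data)
    return stats

stats = parse_kstat(SAMPLE)
total = stats["hits"] + stats["misses"]
print("hit ratio: %.2f%%" % (100.0 * stats["hits"] / total))
# prints: hit ratio: 99.39%
```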

arc_summary.py shows a more detailed overview of the current ARC status and ZFS tunables:
# ./arc_summary.py

------------------------------------------------------------------------
ZFS Subsystem Report Sun Apr 24 19:23:25 2016
ARC Summary: (HEALTHY)
 Memory Throttle Count: 0

ARC Misc:
 Deleted: 46
 Mutex Misses: 0
 Evict Skips: 0

ARC Size: 25.10% 1.95 GiB
 Target Size: (Adaptive) 100.00% 7.78 GiB
 Min Size (Hard Limit): 0.40% 32.00 MiB
 Max Size (High Water): 248:1 7.78 GiB

ARC Size Breakdown:
 Recently Used Cache Size: 50.00% 3.89 GiB
 Frequently Used Cache Size: 50.00% 3.89 GiB

ARC Hash Breakdown:
 Elements Max: 32.31k
 Elements Current: 99.78% 32.24k
 Collisions: 40.54k
 Chain Max: 3
 Chains: 240

ARC Total accesses: 4.54m
 Cache Hit Ratio: 99.39% 4.51m
 Cache Miss Ratio: 0.61% 27.74k
 Actual Hit Ratio: 98.76% 4.48m

 Data Demand Efficiency: 99.73% 3.23m
 Data Prefetch Efficiency: 11.32% 6.41k

 CACHE HITS BY CACHE LIST:
   Anonymously Used: 0.64% 28.65k
   Most Recently Used: 21.90% 987.29k
   Most Frequently Used: 77.47% 3.49m
   Most Recently Used Ghost: 0.00% 0
   Most Frequently Used Ghost: 0.00% 0

 CACHE HITS BY DATA TYPE:
   Demand Data: 71.40% 3.22m
   Prefetch Data: 0.02% 725
   Demand Metadata: 27.97% 1.26m
   Prefetch Metadata: 0.62% 27.92k

 CACHE MISSES BY DATA TYPE:
   Demand Data: 31.81% 8.82k
   Prefetch Data: 20.48% 5.68k
   Demand Metadata: 21.99% 6.10k
   Prefetch Metadata: 25.72% 7.13k

File-Level Prefetch: (HEALTHY)
DMU Efficiency: 36.39m
 Hit Ratio: 93.36% 33.97m
 Miss Ratio: 6.64% 2.42m

 Colinear: 2.42m
   Hit Ratio: 0.02% 505
   Miss Ratio: 99.98% 2.42m

 Stride: 33.94m
   Hit Ratio: 100.00% 33.94m
   Miss Ratio: 0.00% 14

DMU Misc:
 Reclaim: 2.42m
   Successes: 2.46% 59.51k
   Failures: 97.54% 2.36m

 Streams: 35.39k
   +Resets: 0.05% 18
   -Resets: 99.95% 35.37k
   Bogus: 0

ZFS Tunable:
 metaslab_debug_load 0
 zfs_arc_min_prefetch_lifespan 0
 zfetch_max_streams 8
 zfs_nopwrite_enabled 1
 zfetch_min_sec_reap 2
 zfs_dbgmsg_enable 0
 zfs_dirty_data_max_max_percent 25
 zfs_arc_p_aggressive_disable 1
 spa_load_verify_data 1
 zfs_zevent_cols 80
 zfs_dirty_data_max_percent 10
 zfs_sync_pass_dont_compress 5
 l2arc_write_max 8388608
 zfs_vdev_scrub_max_active 2
 zfs_vdev_sync_write_min_active 10
 zvol_prefetch_bytes 131072
 metaslab_aliquot 524288
 zfs_no_scrub_prefetch 0
 zfs_arc_shrink_shift 0
 zfetch_block_cap 256
 zfs_txg_history 0
 zfs_delay_scale 500000
 zfs_vdev_async_write_active_min_dirty_percent 30
 metaslab_debug_unload 0
 zfs_read_history 0
 zvol_max_discard_blocks 16384
 zfs_recover 0
 l2arc_headroom 2
 zfs_deadman_synctime_ms 1000000
 zfs_scan_idle 50
 zfs_free_min_time_ms 1000
 zfs_dirty_data_max 1670300876
 zfs_vdev_async_read_min_active 1
 zfs_mg_noalloc_threshold 0
 zfs_dedup_prefetch 0
 zfs_vdev_max_active 1000
 l2arc_write_boost 8388608
 zfs_resilver_min_time_ms 3000
 zfs_vdev_async_write_max_active 10
 zil_slog_limit 1048576
 zfs_prefetch_disable 0
 zfs_resilver_delay 2
 metaslab_lba_weighting_enabled 1
 zfs_mg_fragmentation_threshold 85
 l2arc_feed_again 1
 zfs_zevent_console 0
 zfs_immediate_write_sz 32768
 zfs_dbgmsg_maxsize 4194304
 zfs_free_leak_on_eio 0
 zfs_deadman_enabled 1
 metaslab_bias_enabled 1
 zfs_arc_p_dampener_disable 1
 zfs_object_mutex_size 64
 zfs_metaslab_fragmentation_threshold 70
 zfs_no_scrub_io 0
 metaslabs_per_vdev 200
 zfs_dbuf_state_index 0
 zfs_vdev_sync_read_min_active 10
 metaslab_fragmentation_factor_enabled 1
 zvol_inhibit_dev 0
 zfs_vdev_async_write_active_max_dirty_percent 60
 zfs_vdev_cache_size 0
 zfs_vdev_mirror_switch_us 10000
 zfs_dirty_data_sync 67108864
 spa_config_path /etc/zfs/zpool.cache
 zfs_dirty_data_max_max 4175752192
 zfs_arc_lotsfree_percent 10
 zfs_zevent_len_max 128
 zfs_scan_min_time_ms 1000
 zfs_arc_sys_free 0
 zfs_arc_meta_strategy 1
 zfs_vdev_cache_bshift 16
 zfs_arc_meta_adjust_restarts 4096
 zfs_max_recordsize 1048576
 zfs_vdev_scrub_min_active 1
 zfs_vdev_read_gap_limit 32768
 zfs_arc_meta_limit 0
 zfs_vdev_sync_write_max_active 10
 l2arc_norw 0
 zfs_arc_meta_prune 10000
 metaslab_preload_enabled 1
 l2arc_nocompress 0
 zvol_major 230
 zfs_vdev_aggregation_limit 131072
 zfs_flags 0
 spa_asize_inflation 24
 zfs_admin_snapshot 0
 l2arc_feed_secs 1
 zio_taskq_batch_pct 75
 zfs_sync_pass_deferred_free 2
 zfs_disable_dup_eviction 0
 zfs_arc_grow_retry 0
 zfs_read_history_hits 0
 zfs_vdev_async_write_min_active 1
 zfs_vdev_async_read_max_active 3
 zfs_scrub_delay 4
 zfs_delay_min_dirty_percent 60
 zfs_free_max_blocks 100000
 zfs_vdev_cache_max 16384
 zio_delay_max 30000
 zfs_top_maxinflight 32
 spa_slop_shift 5
 zfs_vdev_write_gap_limit 4096
 spa_load_verify_metadata 1
 spa_load_verify_maxinflight 10000
 l2arc_noprefetch 1
 zfs_vdev_scheduler noop
 zfs_expire_snapshot 300
 zfs_sync_pass_rewrite 2
 zil_replay_disable 0
 zfs_nocacheflush 0
 zfs_arc_max 0
 zfs_arc_min 0
 zfs_read_chunk_size 1048576
 zfs_txg_timeout 5
 zfs_pd_bytes_max 52428800
 l2arc_headroom_boost 200
 zfs_send_corrupt_data 0
 l2arc_feed_min_ms 200
 zfs_arc_meta_min 0
 zfs_arc_average_blocksize 8192
 zfetch_array_rd_sz 1048576
 zfs_autoimport_disable 1
 zfs_arc_p_min_shift 0
 zio_requeue_io_start_cut_in_line 1
 zfs_vdev_sync_read_max_active 10
 zfs_mdcomp_disable 0
 zfs_arc_num_sublists_per_state 4
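
For reference, the values in the "ZFS Tunable" section are simply the zfs module parameters under /sys/module/zfs/parameters. A sketch that reproduces the list (guarded, since the directory is absent when the module is not loaded):

```python
import os

# The "ZFS Tunable" section is read from sysfs: one file per module
# parameter, each containing the current value.
PARAMS = "/sys/module/zfs/parameters"

def read_tunables(path=PARAMS):
    # Return an empty dict if the zfs module is not loaded.
    if not os.path.isdir(path):
        return {}
    tunables = {}
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            tunables[name] = f.read().strip()
    return tunables

for name, value in read_tunables().items():
    print(" %s %s" % (name, value))
```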

Both tools are well-documented and distributed upstream; see
https://github.com/zfsonlinux/zfs/blob/master/cmd/arcstat/arcstat.py
https://github.com/zfsonlinux/zfs/blob/master/cmd/arc_summary/arc_summary.py
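
If this is purely a packaging question, shipping them could look something like the following (a hypothetical addition to debian/zfsutils-linux.install; source paths taken from the upstream tree linked above, but the maintainers may prefer different install locations or a separate package):

```
# Hypothetical debian/zfsutils-linux.install lines (dh_install format)
cmd/arcstat/arcstat.py usr/bin
cmd/arc_summary/arc_summary.py usr/bin
```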