ceph-osd configurations are not optimized for all flash deployment

Bug #1716111 reported by Nobuto Murata
This bug affects 1 person
Affects: Ceph OSD Charm
Status: Triaged
Importance: Wishlist
Assigned to: Unassigned

Bug Description

As far as I understand, the ceph-osd charm has a config option, "autotune", which applies some optimizations for HDDs. However, it is not yet optimized for all-flash deployments.

[lib/ceph/utils.py]
def tune_dev(block_dev):
    """Try to make some intelligent decisions with HDD tuning. Future work will
    include optimizing SSDs.
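
A minimal sketch (not charm code) of how tune_dev() could branch on device type by reading the kernel's rotational flag; is_rotational(), tune_dev_by_type() and tune_ssd() are hypothetical names used only for illustration:

    import os

    def is_rotational(block_dev):
        """Return True if the kernel reports the device as rotational (HDD)."""
        name = os.path.basename(block_dev)
        try:
            with open('/sys/block/{}/queue/rotational'.format(name)) as f:
                return f.read().strip() == '1'
        except IOError:
            # Unknown device type: fall back to the conservative HDD path.
            return True

    def tune_dev_by_type(block_dev):
        if is_rotational(block_dev):
            tune_dev(block_dev)   # existing HDD tuning
        else:
            tune_ssd(block_dev)   # hypothetical all-flash tuning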

According to Intel's tuning info:
http://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments
I saw a huge improvement in the total-IOPS-optimized scenario. In short, I got only 40K total IOPS on 4K writes from multiple clients, but after applying the configuration below I see 140K total IOPS with the same test.
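
For reference, a representative 4K random-write fio job against RBD for this kind of comparison could look like the following (pool/image name, queue depth and job count are illustrative placeholders, not the exact parameters of the test above):

    [4k-randwrite]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    direct=1
    rw=randwrite
    bs=4k
    iodepth=32
    numjobs=4
    runtime=60
    time_based=1
    group_reporting=1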

It would be nice if the ceph-osd charm were capable of applying some tunings for all-flash deployments.

====

My config-flags for the ceph-osd charm, converted from:
http://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments#Cephconf

config-flags: >
    {
        global: {
            debug_lockdep: 0/0,
            debug_context: 0/0,
            debug_crush: 0/0,
            debug_buffer: 0/0,
            debug_timer: 0/0,
            debug_filer: 0/0,
            debug_objecter: 0/0,
            debug_rados: 0/0,
            debug_rbd: 0/0,
            debug_journaler: 0/0,
            debug_objectcacher: 0/0,
            debug_client: 0/0,
            debug_osd: 0/0,
            debug_optracker: 0/0,
            debug_objclass: 0/0,
            debug_filestore: 0/0,
            debug_journal: 0/0,
            debug_ms: 0/0,
            debug_monc: 0/0,
            debug_tp: 0/0,
            debug_auth: 0/0,
            debug_finisher: 0/0,
            debug_heartbeatmap: 0/0,
            debug_perfcounter: 0/0,
            debug_asok: 0/0,
            debug_throttle: 0/0,
            debug_mon: 0/0,
            debug_paxos: 0/0,
            debug_rgw: 0/0,
            ms_type: async,
            "rbd readahead disable after bytes": 0,
            "rbd readahead max bytes": 4194304,
            "filestore xattr use omap": true,
            "osd pg bits": 8,
            "osd pgp bits": 8,
            perf: true,
            mutex_perf_counter: true,
            throttler_perf_counter: false,
            "rbd cache": false
        },
        osd: {
            filestore_queue_max_ops: 5000,
            filestore_queue_max_bytes: 1048576000,
            filestore_max_sync_interval: 10,
            filestore_merge_threshold: 500,
            filestore_split_multiple: 100,
            osd_op_shard_threads: 8,
            journal_max_write_entries: 5000,
            journal_max_write_bytes: 1048576000,
            journal_queue_max_ops: 3000,
            journal_queue_max_bytes: 1048576000,
            ms_dispatch_throttle_bytes: 1048576000,
            objecter_inflight_op_bytes: 1048576000,
            osd_op_threads: 32,
            filestore_queue_committing_max_ops: 5000,
            filestore_queue_committing_max_bytes: 1048576000,
            filestore_wbthrottle_enable: false,
            osd_client_message_size_cap: 0,
            osd_client_message_cap: 0,
            osd_enable_op_tracker: false,
            filestore_fd_cache_size: 64,
            filestore_fd_cache_shards: 32,
            filestore_op_threads: 6
        }
    }
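
For reference, one way to feed this block to the charm (assuming Juju 2.x; the file name is arbitrary) is to save it under the application key in a YAML file and pass it at deploy time, or to set it on a running application:

    # ceph-osd.yaml contains the "config-flags: >" block above, nested under
    # a top-level "ceph-osd:" key
    juju deploy ceph-osd --config ceph-osd.yaml

    # or, on an existing deployment:
    juju config ceph-osd config-flags='{ global: {...}, osd: {...} }'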

Nobuto Murata (nobuto)
description: updated

Nobuto Murata (nobuto)
tags: added: cpe-onsite
James Page (james-page) wrote:

Feature request, triaging as wishlist.

Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Wishlist

Nobuto Murata (nobuto)
description: updated
tags: added: 4010