Extremely low disk write performance

Bug #346461 reported by Max
Affects: Nexenta Operating System
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Disk write performance is extremely low on Nexenta 2.0 beta 2, compared to other operating systems running on the same hardware.

Block write performance as measured by bonnie++:

Nexenta 2.0 beta 2: 4 MB/s
FreeBSD 7.1 (ZFS): 58 MB/s
OpenSolaris preview 2009.06 b109: 74 MB/s
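
(The exact bonnie++ invocation is not recorded here; a typical run against a ZFS dataset looks roughly like the line below, where the target directory is a placeholder, -s is the total file size in MB and should be at least twice the machine's RAM so the test is not served from cache, and -u is the user to run as when started as root:)

bonnie++ -d /tank/bench -s 8192 -u root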

Disk drive: Mtron Pro 7000 SSD
Disk controller: tried both the onboard SATA ports (NVIDIA MCP55 chipset) and an Areca 1200 PCIe SATA card.

Also created a forum post, but have received no replies so far: http://www.nexenta.com/corp/index.php?option=com_fireboard&Itemid=44&func=view&catid=11&id=755

Revision history for this message
John Pyper (jpyper) wrote : Re: [Bug 346461] [NEW] Extremely low disk write performance

Try running some basic speed tests like the following:

sudo hdparm -t /dev/your_drive    # timed buffered disk reads
sudo hdparm -T /dev/your_drive    # timed cache reads

You might have to do some sysctl and ioctl hackery.

I've been running Ubuntu 'hardy' with a generic 500 GB drive over eSATA
and was running into very low transfer speeds with it. Come to find out,
DMA wasn't being enabled for some reason.
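
(For what it's worth, on the Linux side hdparm can also report and toggle the DMA flag directly; the device name below is a placeholder, and this mainly applies to drives handled by the old IDE driver layer:)

sudo hdparm -d /dev/your_drive     # report whether using_dma is currently on
sudo hdparm -d1 /dev/your_drive    # attempt to switch DMA on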

I have not tried this with any of the recent Nexenta builds, so your
results might be different.

I hope this helps a little.

John

Revision history for this message
Max (mail-to-the-max) wrote :

Is there a Nexenta/OpenSolaris version of hdparm available somewhere?
The one on SourceForge does not compile and appears to be Linux-only (it has includes like "linux/types.h").
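
In the absence of hdparm, a crude throughput check can be done with dd; the device node and pool paths below are only examples and would have to match the actual system (the disk can be identified with "format" or "iostat -En"):

# sequential read straight from the raw device, bypassing ZFS
ptime dd if=/dev/rdsk/c1t1d0s0 of=/dev/null bs=1024k count=1000

# sequential write of about 1 GB through the ZFS file system
ptime dd if=/dev/zero of=/testpool/ddtest bs=1024k count=1000

Solaris dd does not print a rate itself, so dividing the ~1000 MB transferred by the real time that ptime reports gives the throughput.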

The system seems to know the disk supports DMA:

==
Mar 22 07:12:15 nexenta pcplusmp: [ID 803547 kern.info] pcplusmp: pci10de,37f (nv_sata) instance 0 vector 0x14 ioapic 0x8 intin 0x14
is bound to cpu 2
Mar 22 07:12:15 nexenta sata: [ID 663010 kern.info] /pci@0,0/pci10de,cb84@5 :
Mar 22 07:12:15 nexenta sata: [ID 761595 kern.info] SATA disk device at port 1
Mar 22 07:12:15 nexenta sata: [ID 846691 kern.info] model MTRON MSP-SATA7035
Mar 22 07:12:15 nexenta sata: [ID 693010 kern.info] firmware 0.20R1
Mar 22 07:12:15 nexenta sata: [ID 163988 kern.info] serial number 0HG1170490056
Mar 22 07:12:15 nexenta sata: [ID 594940 kern.info] supported features:
Mar 22 07:12:15 nexenta sata: [ID 981177 kern.info] 28-bit LBA, DMA, SMART
Mar 22 07:12:15 nexenta sata: [ID 349649 kern.info] capacity = 125001728 sectors
Mar 22 07:12:15 nexenta scsi: [ID 193665 kern.info] sd0 at nv_sata0: target 1 lun 0
Mar 22 07:12:15 nexenta genunix: [ID 936769 kern.info] sd0 is /pci@0,0/pci10de,cb84@5/disk@1,0
Mar 22 07:12:15 nexenta genunix: [ID 408114 kern.info] /pci@0,0/pci10de,cb84@5/disk@1,0 (sd0) online
==

Is there any way to check if DMA is actually being used?
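
There does not seem to be a direct per-port indicator for the negotiated DMA mode, but watching the device with iostat while bonnie++ runs at least shows what the disk is actually delivering (standard Solaris flags: extended stats, descriptive device names, skip idle devices; 5-second intervals):

iostat -xnz 5

kw/s is the write throughput, asvc_t the average service time and %b how busy the device is.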

I have the impression the performance problems only occur when writing data.

I played around with the various ZFS /etc/system tunables mentioned in the "ZFS Evil Tuning Guide", and also tried mounting the ZFS file system async, but it did not make any noticeable difference.

"man nv_sata" seems to indicate there are no tunable settings on the driver level:

==
CONFIGURATION
       The nv_sata module contains no user configurable parameters.
==

Are there any other settings that impact the disk system?
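
For reference, the tunables the Evil Tuning Guide describes go into /etc/system and take effect after a reboot. The cache-flush one is sometimes relevant for SSDs whose flush handling is slow, but both lines below trade data safety for speed and are meant for testing only:

* /etc/system: test-only ZFS tunables, reboot required
set zfs:zfs_nocacheflush = 1
* disabling the ZIL is even more drastic and risks losing recent writes on power failure
* set zfs:zil_disable = 1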

Revision history for this message
Erast (erast) wrote :

And while doing heavy I/O, where are we sitting?

lockstat -IkW sleep 15

Revision history for this message
Max (mail-to-the-max) wrote :

While running bonnie++:

====
====

Writing with putc():

====
====

root@nexenta:~# lockstat -IkW sleep 15

Profiling interrupt: 17008 events in 21.918 seconds (776 events/sec)

Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
16780 99% 99% 0.00 12227 cpu[5] mach_cpu_idle
  118 1% 99% 0.00 2614 cpu[7] (usermode)
   30 0% 100% 0.00 3880 cpu[2] scan_memory
   10 0% 100% 0.00 4194 cpu[2] do_splx
    6 0% 100% 0.00 2896 cpu[7] mutex_enter
    6 0% 100% 0.00 3156 cpu[7] do_copy_fault_nta
    5 0% 100% 0.00 4704 cpu[2]+11 splr
    4 0% 100% 0.00 2715 cpu[7] copy_pattern
    3 0% 100% 0.00 3428 cpu[2]+5 ddi_io_getl
    3 0% 100% 0.00 3871 cpu[2]+5 bcopy
    2 0% 100% 0.00 2025 cpu[0]+11 exp_x
    2 0% 100% 0.00 3400 cpu[2]+5 atomic_add_64
    2 0% 100% 0.00 3844 cpu[2]+5 mtype_func
    1 0% 100% 0.00 4379 cpu[2]+5 rootnex_bind_slowpath
    1 0% 100% 0.00 4177 cpu[7] rootnex_dma_bindhdl
    1 0% 100% 0.00 3078 cpu[2]+11 sleepq_wakeall_chan
    1 0% 100% 0.00 4104 cpu[4] swapfs_getvp
    1 0% 100% 0.00 1831 cpu[0]+11 sleepq_insert
    1 0% 100% 0.00 2903 cpu[4] cpu_update_pct
    1 0% 100% 0.00 2952 cpu[7] kmem_free
    1 0% 100% 0.00 1886 cpu[0] kmem_cache_alloc
    1 0% 100% 0.00 1515 cpu[7] clear_stale_fd
    1 0% 100% 0.00 3515 cpu[3] fsflush_do_pages
    1 0% 100% 0.00 4722 cpu[2]+11 thread_lock
    1 0% 100% 0.00 2068 cpu[1] timeout_generic
    1 0% 100% 0.00 3411 cpu[2]+5 geterror
    1 0% 100% 0.00 2221 cpu[0]+10 sir_on
    1 0% 100% 0.00 1814 cpu[0]+2 av_dispatch_softvect
    1 0% 100% 0.00 2073 cpu[7] page_ctr_sub
    1 0% 100% 0.00 2871 cpu[7] page_hashin
    1 0% 100% 0.00 3803 cpu[2]+5 lgrp_memnode_choose
    1 0% 100% 0.00 2871 cpu[7] lgrp_mem_choose
    1 0% 100% 0.00 1854 cpu[0] disp_getwork
    1 0% 100% 0.00 3938 cpu[2] thread_affinity_set
    1 0% 100% 0.00 1544 cpu[0] idle
    1 0% 100% 0.00 2240 cpu[0]+2 rw_enter
    1 0% 100% 0.00 2110 cpu[1] ...
