write performance slowdown on ZFS pool with ahci (8.1-RELEASE)

Greetings,

I recently upgraded my ZFS/NFS server to 8.1-RELEASE and tried the ahci driver by creating a 5 GB file with dd on a ZFS pool. Here are the results:

With AHCI disabled:
Code:
5368709120 bytes transferred in 106.757255 secs (50288939 bytes/sec)

With AHCI enabled:
Code:
5368709120 bytes transferred in 119.206514 secs (45037045 bytes/sec)

The command I ran is: dd if=/dev/zero of=/myfs/file5GB bs=1k count=5M (/myfs is a ZFS mountpoint)

Any hints?

Regards,
 
ahci will be slightly worse in situations like benchmarking with dd, but better under loads where many things want disk access at once, such as server environments.

Also, in my view ZFS works better with ahci when the following two settings are in loader.conf.

Code:
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=8
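
If you want to see what your system is currently using before touching loader.conf, the values can be read at runtime; a minimal sketch (output will vary by system):

Code:
# sysctl vfs.zfs.vdev.min_pending
# sysctl vfs.zfs.vdev.max_pending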
 
chrcol said:
ahci will be slightly worse in situations like benchmarking with dd, but better under loads where many things want disk access at once, such as server environments.

Also, in my view ZFS works better with ahci when the following two settings are in loader.conf.

Code:
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=8
Your settings help a little to increase write performance, thank you chrcol.
 
You also shouldn't use dd as a benchmarking tool, especially when using /dev/zero. If you have compression enabled on a ZFS filesystem, you'll get SUPER high write speeds. :)
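
If you're not sure whether compression is on, it only takes a second to check and, for the benchmark, to turn it off (the dataset name here is just an example):

Code:
# zfs get compression myfs
# zfs set compression=off myfs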

Either use /dev/random (which may limit the read speed) or use a real filesystem benchmarking tool like bonnie++ or similar.
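
As a rough sketch, a bonnie++ run against the pool might look like this (the path and size are examples; -s should be well above your RAM size so caching doesn't skew the numbers, and -u is needed when running as root):

Code:
# bonnie++ -d /myfs -s 8g -u root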
 
phoenix said:
You also shouldn't use dd as a benchmarking tool, especially when using /dev/zero. If you have compression enabled on a ZFS filesystem, you'll get SUPER high write speeds. :)

Either use /dev/random (which may limit the read speed) or use a real filesystem benchmarking tool like bonnie++ or similar.
Today I found a useful link which mentions bonnie(++) too.

I've also been told that using bs=128k with dd doubles the write speed, so I realize dd is definitely not a benchmarking tool.

Thank you for your advice
 
sidh said:
I've also been told that using bs=128k with dd doubles the write speed, so I realize dd is definitely not a benchmarking tool.
Try using 8-16m (megabytes) for even more performance.
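
For example, the same 5 GB write with a 16 MB block size, with count scaled so bs*count stays at 5 GiB (an untested sketch):

Code:
# dd if=/dev/zero of=/myfs/file5GB bs=16m count=320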
 
chrcol said:
Code:
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=8

Are these two tunables documented anywhere? I'd like to understand what they do before I apply them to my system.
 
jem said:
Are these two tunables documented anywhere? I'd like to understand what they do before I apply them to my system.

Here mate:
Code:
# sysctl -d vfs.zfs.vdev.min_pending
vfs.zfs.vdev.min_pending: Initial number of I/O requests pending to each device

# sysctl -d vfs.zfs.vdev.max_pending
vfs.zfs.vdev.max_pending: Maximum I/O requests pending on each device
 
Hi,

$ sysctl -d vfs.zfs.vdev.min_pending
and
$ sysctl -d vfs.zfs.vdev.max_pending
will give you a description of those settings.

Regards,
 
Yeah, it's internal FS queuing, but ahci has its own queuing (NCQ), so it seems logical that reducing them would be better. I assumed a min and max of 1 would in fact be optimal with ahci (NCQ), but those values seem to work best for me.
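
If you want to see what the drive itself reports for NCQ under ahci, camcontrol can show it (ada0 is just an example device name):

Code:
# camcontrol identify ada0 | grep -i queue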
 