I came into two Dell PowerVault storage arrays that each hold 15 disks, and I have 30 750GB drives to fill them with. I want to build a high-availability centralized storage server, primarily for our Xen servers. I am hoping to use deduplication, since the servers have 32GB of RAM each and we deploy about 10-20 identical servers every couple of months.
For testing on a single host, I created a single 10-disk raidz2 pool with dedup enabled. Performance is good for the sequential tests I ran (>200MB/s). However, if I export the pool via NFS, performance drops like a rock to well under 10MB/s, and I can't figure out what is causing it.
I enabled sharing by doing [CMD=""]zfs set sharenfs=on /virt[/CMD] and allowed it in /etc/exports.
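In case it matters, this is how I've been verifying that the share is actually visible (the dataset name tank/virt is just a guess at my own layout, adjust as needed):

```shell
# Confirm the sharenfs property took effect on the dataset backing /virt
zfs get sharenfs tank/virt    # tank/virt is a placeholder dataset name

# Check what mountd is actually exporting
showmount -e localhost

# Make sure the NFS RPC services are registered with rpcbind
rpcinfo -p localhost
```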
My NFS settings in /etc/rc.conf are
Code:
rpcbind_enable="YES"
nfs_reserved_port_only="YES"
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 10"
nfs_client_enable="NO"
nfs_client_flags="-n 4"
rpc_lockd_enable="NO"
rpc_statd_enable="NO"
mountd_enable="YES"
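After editing /etc/rc.conf I restarted the daemons roughly like this (from memory, using the stock FreeBSD rc scripts):

```shell
# Restart the NFS-related daemons so the new rc.conf settings apply
/etc/rc.d/rpcbind restart
/etc/rc.d/nfsd restart
/etc/rc.d/mountd restart    # mountd must re-read the exports list
```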
The performance I get is horrible on a Xen server (CentOS-based) only one hop away, mounting with the following command, where ubu is the name of the storage server:
[CMD=""]mount -o async,rsize=128000,wsize=128000,nolock,tcp ubu:/virt /mnt[/CMD]
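For the throughput numbers above I'm measuring simple sequential I/O with dd (file name and sizes are arbitrary):

```shell
# Sequential write test over the NFS mount, run on the CentOS client
dd if=/dev/zero of=/mnt/testfile bs=1M count=1024

# Sequential read test of the same file
dd if=/mnt/testfile of=/dev/null bs=1M
```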
I turned off the ZFS intent log (ZIL) and performance didn't change.
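For the record, this is roughly how I disabled it (on my ZFS version it's a sysctl; if I understand right, newer pools use a per-dataset sync property instead):

```shell
# Older FreeBSD ZFS: disable the ZIL globally via sysctl
sysctl vfs.zfs.zil_disable=1

# Newer ZFS (pool v28 and later): per-dataset property instead
# zfs set sync=disabled tank/virt    # tank/virt is a placeholder dataset name
```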
If I scp something, performance is normal, so I don't think it is the switch. The other odd thing is that if I use the same NFS options as above (minus nolock) and mount the NFS share locally, the speed is ~80MB/s, which is much less than the >200MB/s I see on the pool directly but much higher than the <10MB/s over the network. Also, if I create an NFS share on the local gmirror array and mount it on the Xen server, I see the same low performance. So I feel like it isn't ZFS, but NFS is so simple to set up that I don't see how I could have screwed it up.
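To rule out the network more directly than scp, I could also run iperf between the two boxes (assuming it's installed from ports on both ends):

```shell
# On the storage server (ubu): run iperf in server mode
iperf -s

# On the Xen client: measure raw TCP throughput to the server for 10 seconds
iperf -c ubu -t 10
```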
Any ideas? Also, if I am barking up the wrong tree trying to use NFS to share my ZFS array, let me know. I am new to ZFS and especially to high availability. Once I get a handle on the NFS performance, I plan to start looking into HAST and CARP. So if anyone sees red flags or has advice for me, I would love to hear it. If you need more information, please ask.
Thanks in advance.