Samba 4.3 on ZFS Tuning / Performance

I have a FreeBSD 10.3-RELEASE server running Samba 4.3.

I'm trying to figure out why I only get about 24 MB/s out of Samba when my ZFS array is capable of much faster speeds and I'm on a gigabit network connection.

Some info:

Samba up/down speeds: ~24 MB/s
SCP up/down speeds: ~80 MB/s
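
Before blaming Samba itself, it may be worth ruling out the raw network path. A quick sketch with iperf3 (benchmarks/iperf3 in ports; the address below is just the server interface from the smb4.conf further down):

Code:
# on the FreeBSD server
$ iperf3 -s

# on the Mac (or any other client)
$ iperf3 -c 192.168.10.190 -t 10

A clean gigabit link should report something close to 940 Mbit/s (~110 MB/s); if this also comes in low, the bottleneck isn't Samba at all.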

Code:
$ sudo dd if=/dev/random of=/tank/samba/1GB.txt bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 16.132504 secs (64997721 bytes/sec)

Code:
sudo dd if=/tank/samba/1GB.txt > /dev/null
2048000+0 records in
2048000+0 records out
1048576000 bytes transferred in 5.468385 secs (191752410 bytes/sec)
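
Two caveats on those dd numbers, just as a sanity check: /dev/random can itself be a bottleneck on the write test, and the read test above used dd's default 512-byte block size (hence the 2048000 records), which adds a lot of syscall overhead; the read may also be served partly from the ARC. Something along these lines gives a cleaner local baseline (the 1GB.bin name is just a placeholder, and a file written from /dev/zero will overstate read speed if the dataset has compression enabled):

Code:
# write test without the RNG in the way (inflated if compression=on)
$ sudo dd if=/dev/zero of=/tank/samba/1GB.bin bs=1M count=1000

# read test with a 1M block size instead of the 512-byte default
$ sudo dd if=/tank/samba/1GB.bin of=/dev/null bs=1M

Either way, both local numbers are already well above 24 MB/s, so the pool itself doesn't look like the limit.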

There are some posts out there about this, but they tend to be specific to Linux, so I'm not sure the socket options are applicable to FreeBSD. In any case, none of the suggestions from a number of blogs and forum posts have made any difference for me. Still stuck at 24 MB/s.

I should also note that I am testing from Macs but can test from Windows if required.

Stuff I have already tried / reviewed:

http://www.eggplant.pro/blog/faster-samba-smb-cifs-share-performance/
https://calomel.org/samba_optimize.html
http://plazko.io/apple-osx-finder-i...shared-hard-drive-connected-to-a-wifi-router/
And much, much more...
 
There are some posts out there about this, but they tend to be specific to Linux, so I'm not sure the socket options are applicable to FreeBSD.
It's a bit hit and miss, really. I've found I have to change them with each new version of Samba and/or my base system. Sometimes I have to remove them all, sometimes I need to add them back. I'm not sure why I keep having to change them.

In any case, this is what I have now. Feel free to try any of the options I remarked.
Code:
[global]

   workgroup = DICELAN
   server string = Samba Server
   security = user
   ;hosts allow = 192.168.1. 192.168.2. 127. 

   log file = /var/log/samba4/log.%m
   max log size = 50

   ;socket options = SO_RCVBUF=8192 SO_SNDBUF=8192 TCP_NODELAY
   ;socket options = SO_RCVBUF=131072 SO_SNDBUF=131072 TCP_NODELAY
   ;socket options = TCP_NODELAY IPTOS_LOWDELAY

   ;min receivefile size = 16384
   ;aio read size = 16384
   ;aio write size = 16384
   ;aio write behind = true

   ;use sendfile = no
   smb ports = 445

   ;domain master = yes
   local master = yes
   preferred master = yes
   os level = 65

   interfaces = 192.168.10.190/24

   bind interfaces only = no
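
Whichever combination you end up with, testparm is useful for confirming what Samba is actually running with once lines are commented in or out (the path below is where the net/samba43 port puts its config; adjust if yours lives elsewhere):

Code:
$ testparm -sv /usr/local/etc/smb4.conf | egrep 'socket options|sendfile|aio|smb ports'

That at least takes the guesswork out of which built-in defaults your particular build is using.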
 
I never actually measured it; it's mostly based on what I experience accessing files (I use it mainly for media storage). The hardware is an MSI mainboard, a Core i5, 8 GB of RAM, an LSI 2308 SAS/SATA card and four 3 TB Seagate Barracudas (one RAIDZ pool).

As you're having problems with Macs, definitely try it from a Windows machine too. I only have Windows, FreeBSD and a couple of Raspberry Pis running OpenELEC, but I know Windows file sharing from OS X can be a bit of a pain when it comes to performance. It'll work, but not as well as it should.
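
If it does turn out to be Mac-specific, two client-side tweaks are worth a try before touching the server again (both are plain macOS settings, nothing to do with the FreeBSD box, so treat them as suggestions rather than a known fix):

Code:
# on the Mac: stop Finder from writing .DS_Store files to network shares
$ defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true

# on the Mac: turn off client-side SMB packet signing via /etc/nsmb.conf
$ printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf

The first addresses Finder's habit of writing metadata files to every share it browses; the second addresses the packet-signing overhead newer OS X releases add to SMB connections.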
 
It's a bit hit and miss, really. I've found I have to change them with each new version of Samba and/or my base system. Sometimes I have to remove them all, sometimes I need to add them back. I'm not sure why I keep having to change them.
I had to remove:
Code:
use sendfile=true
from my smb4.conf (10-STABLE, net/samba43), or clients would randomly lose connectivity to shares on ZFS pools and the Samba log file would fill up with "Connection reset by peer" errors.
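
For anyone hitting the same thing, the resets are easy to spot in the per-client logs before and after the change (the path matches the log file setting in the config posted above; adjust to wherever yours points):

Code:
$ grep -c 'Connection reset by peer' /var/log/samba4/log.*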

However, performance is still abysmal. I actually have the filesystem exported via NFS to an 8-STABLE, net/samba36 legacy system (yes, I know both of those are EoL) and client performance is much better that way*. At some point I'll have to dig into it unless someone comes up with an explanation / solution before then.

* The network connection between the 10-STABLE and 8-STABLE boxes is 10GbE, and the real-world transfer speed of data from the ZFS pool is around 750 MB/s. Since the Windows client is on gigabit Ethernet and not 10GbE, the extra hop doesn't hurt and, as I said above, actually improves performance.
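
For reference, the NFS leg of that detour is nothing exotic; a minimal sketch, with the dataset name, address and mountpoint borrowed from earlier in the thread purely as placeholders:

Code:
# on the 10-STABLE box: let ZFS manage the export
# (assumes nfs_server_enable, mountd_enable and rpcbind_enable are set in rc.conf)
$ sudo zfs set sharenfs=on tank/samba

# on the 8-STABLE/samba36 box: mount it and point the legacy Samba share at the mountpoint
$ sudo mount -t nfs 192.168.10.190:/tank/samba /mnt/samba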
 