Solved: Bursty disk writes to ZFS/NFS share

Hi everybody! Happy & healthy 2021!

My newly assembled FreeBSD server is now up and running. 🥳
Summary of the setup:
CPU: Intel(R) Pentium(R) CPU G4560 @ 3.50GHz (3504.17-MHz K8-class CPU)
Storage: ZFS RAIDZ2 using 5 Toshiba N300 7.2K 4TB
Memory: 32GB, I limited ARC to 16GB in /etc/sysctl.conf
Code:
#limiting ARC to 16GB of memory
vfs.zfs.arc_max = 17179869184

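To confirm the limit actually took effect, I check it at runtime (a quick sketch; these are the legacy sysctl names used by ZFS on FreeBSD 12.x):
Code:
# configured ceiling, in bytes
sysctl vfs.zfs.arc_max
# current ARC size reported by the kernel, for comparison
sysctl kstat.zfs.misc.arcstats.size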
Issue: write performance to the server is not that great. I am running a batch file using robocopy from a Windows 10 client (connected by cable on a 1 Gbit network) to a ZFS dataset exported over NFS from the FreeBSD box. The writes are bursty; please check the audio sample below.


I have followed the ZFS tuning guide and already applied the ARC memory limit (before that, ARC was eating all the available memory). Now I want to address the bursty write behaviour (if it is an issue at all).

P.S. I have another post where I ask for guidance on adding an L2ARC on an NVMe SSD: How to Shrink Boot to add L2ARC?

Question: any ideas on what I can do to improve write performance to the NFS share?

Thanks in advance!

/Antonio


P.S. ZFS statistics that might be of use, retrieved with zfs-stats -a:
Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Jan  3 15:18:52 2021
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1202000 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 12.2-RELEASE r366954 GENERIC  3:18PM  up 13:30, 1 user, load averages: 1.99, 0.91, 0.41

------------------------------------------------------------------------

System Memory:

        0.02%   5.79    MiB Active,     0.10%   30.82   MiB Inact
        70.74%  21.97   GiB Wired,      0.00%   0       Bytes Cache
        29.12%  9.04    GiB Free,       0.02%   7.78    MiB Gap

        Real Installed:                         32.00   GiB
        Real Available:                 99.68%  31.90   GiB
        Real Managed:                   97.37%  31.06   GiB

        Logical Total:                          32.00   GiB
        Logical Used:                   71.65%  22.93   GiB
        Logical Free:                   28.35%  9.07    GiB

Kernel Memory:                                  474.88  MiB
        Data:                           92.25%  438.05  MiB
        Text:                           7.75%   36.83   MiB

Kernel Memory Map:                              31.06   GiB
        Size:                           67.92%  21.10   GiB
        Free:                           32.08%  9.96    GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                6.73    m
        Recycle Misses:                         0
        Mutex Misses:                           20
        Evict Skips:                            12

ARC Size:                               100.16% 16.03   GiB
        Target Size: (Adaptive)         100.00% 16.00   GiB
        Min Size (Hard Limit):          23.48%  3.76    GiB
        Max Size (High Water):          4:1     16.00   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       99.17%  15.89   GiB
        Frequently Used Cache Size:     0.83%   136.55  MiB

ARC Hash Breakdown:
        Elements Max:                           440.97  k
        Elements Current:               99.83%  440.21  k
        Collisions:                             362.75  k
        Chain Max:                              4
        Chains:                                 21.32   k

------------------------------------------------------------------------

ARC Efficiency:                                 42.84   m
        Cache Hit Ratio:                99.79%  42.75   m
        Cache Miss Ratio:               0.21%   88.64   k
        Actual Hit Ratio:               99.79%  42.75   m

        Data Demand Efficiency:         96.92%  22.42   k
        Data Prefetch Efficiency:       55.32%  1.72    k

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           2.73%   1.17    m
          Most Frequently Used:         97.26%  41.58   m
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.01%   3.13    k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  0.05%   21.73   k
          Prefetch Data:                0.00%   952
          Demand Metadata:              99.94%  42.73   m
          Prefetch Metadata:            0.00%   1.31    k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  0.78%   690
          Prefetch Data:                0.87%   769
          Demand Metadata:              97.45%  86.38   k
          Prefetch Metadata:            0.90%   799

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

File-Level Prefetch:

DMU Efficiency:                                 15.04   m
        Hit Ratio:                      46.68%  7.02    m
        Miss Ratio:                     53.32%  8.02    m

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           2377
        vm.kmem_size                            33347727360
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.enabled                    1
        vfs.zfs.vol.immediate_write_sz          32768
        vfs.zfs.vol.unmap_sync_enabled          0
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.recursive                   0
        vfs.zfs.vol.mode                        1
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   7
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.immediate_write_sz              32768
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.dva_throttle_enabled        1
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.zio.taskq_batch_pct             75
        vfs.zfs.zil_maxblocksize                131072
        vfs.zfs.zil_slog_bulk                   786432
        vfs.zfs.zil_nocacheflush                0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.standard_sm_blksz               131072
        vfs.zfs.dtl_sm_blksz                    4096
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.def_queue_depth            32
        vfs.zfs.vdev.queue_depth_pct            1000
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit_non_rotating 131072
        vfs.zfs.vdev.aggregation_limit          1048576
        vfs.zfs.vdev.initializing_max_active    1
        vfs.zfs.vdev.initializing_min_active    1
        vfs.zfs.vdev.removal_max_active         2
        vfs.zfs.vdev.removal_min_active         1
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent 60
        vfs.zfs.vdev.async_write_active_min_dirty_percent 30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.validate_skip              0
        vfs.zfs.vdev.max_ms_shift               34
        vfs.zfs.vdev.default_ms_shift           29
        vfs.zfs.vdev.max_ms_count_limit         131072
        vfs.zfs.vdev.min_ms_count               16
        vfs.zfs.vdev.default_ms_count           200
        vfs.zfs.txg.timeout                     5
        vfs.zfs.space_map_ibs                   14
        vfs.zfs.special_class_metadata_reserve_pct 25
        vfs.zfs.user_indirect_is_special        1
        vfs.zfs.ddt_data_is_special             1
        vfs.zfs.spa_allocators                  4
        vfs.zfs.spa_min_slop                    134217728
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debugflags                      0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.max_missing_tvds_scan           0
        vfs.zfs.max_missing_tvds_cachefile      2
        vfs.zfs.max_missing_tvds                0
        vfs.zfs.spa_load_print_vdev_tree        0
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.multihost_fail_intervals        10
        vfs.zfs.multihost_import_intervals      20
        vfs.zfs.multihost_interval              1000
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab_sm_blksz               4096
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled 1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold 70
        vfs.zfs.metaslab.force_ganging          16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 -1
        vfs.zfs.zfs_scan_checkpoint_interval    7200
        vfs.zfs.zfs_scan_legacy                 0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_idistance            67108864
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync_pct             20
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  3425001472
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.default_ibs                     17
        vfs.zfs.default_bs                      9
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.per_txg_dirty_frees_percent     5
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.dbuf_cache_lowater_pct          10
        vfs.zfs.dbuf_cache_hiwater_pct          10
        vfs.zfs.dbuf_metadata_cache_overflow    0
        vfs.zfs.dbuf_metadata_cache_shift       6
        vfs.zfs.dbuf_cache_shift                5
        vfs.zfs.dbuf_metadata_cache_max_bytes   504281024
        vfs.zfs.dbuf_cache_max_bytes            1008562048
        vfs.zfs.arc_min_prescient_prefetch_ms   6
        vfs.zfs.arc_min_prefetch_ms             1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_esize            100138496
        vfs.zfs.mfu_ghost_metadata_esize        75818496
        vfs.zfs.mfu_ghost_size                  175956992
        vfs.zfs.mfu_data_esize                  0
        vfs.zfs.mfu_metadata_esize              20131328
        vfs.zfs.mfu_size                        90954240
        vfs.zfs.mru_ghost_data_esize            571018752
        vfs.zfs.mru_ghost_metadata_esize        0
        vfs.zfs.mru_ghost_size                  571018752
        vfs.zfs.mru_data_esize                  15364954112
        vfs.zfs.mru_metadata_esize              381841408
        vfs.zfs.mru_size                        16638790144
        vfs.zfs.anon_data_esize                 0
        vfs.zfs.anon_metadata_esize             0
        vfs.zfs.anon_size                       471040
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_strategy               0
        vfs.zfs.arc_meta_limit                  4294967296
        vfs.zfs.arc_free_target                 173490
        vfs.zfs.arc_kmem_cache_reap_retry_ms    1000
        vfs.zfs.compressed_arc_enabled          1
        vfs.zfs.arc_grow_retry                  60
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_no_grow_shift               5
        vfs.zfs.arc_min                         4034248192
        vfs.zfs.arc_max                         17179869184
        vfs.zfs.abd_chunk_size                  4096
        vfs.zfs.abd_scatter_enabled             1




 
What is your performance? You say "not great". Is it 1 MB/s? Or 10? Or 100?

You say you have 5 disk drives, and they are configured as RAID-Z2. That means in effect 3 data disks, and two redundancy / parity disks. I would expect a sequential large write performance from a local source of about 300 MB/s, plus or minus a factor of two either way. Why? Because typical disks can sustain about 100 MB/s, and you have 3 disks for data working in parallel. Why the factor of two? Because on the outermost tracks, working fully sequential, disks can actually run at up to 200 MB/s (roughly, the older 4TB models are probably more like 180 MB/s). On the other hand, the smallest amount of randomness or seeking will reduce the performance dreadfully.

That is all for the simple case when the data is coming to a single file, fully sequential, from a locally running process (no NFS involved). The moment you add complexities like NFS, multiple files, random seeks, interestingly complex metadata (directory structures), it can get very strange.

ARC has little to do with write performance. For write performance, what matters most is the (f)sync behavior of the application doing the writing. You don't need much memory to buffer writes to keep the disks completely busy. The burstiness is probably caused by some feedback effect in whatever program is doing the writing.
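If you want to watch that behaviour on the server while the copy runs, something like the sketch below should do (assuming the pool is called tank, as shown later in the thread; the burst every few seconds corresponds to the txg sync interval, vfs.zfs.txg.timeout = 5 in your dump):
Code:
# per-vdev throughput, refreshed every second
zpool iostat -v tank 1
# whether the dataset forces synchronous writes (relevant once NFS is involved)
zfs get sync tank/data1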
 
Thanks for your time checking this!

I have now finished collecting the results of a sample data transfer:
45 GB of data was transferred in 104 minutes. The data comprises 93k files in 17k folders.
That works out to a throughput of about 57 Mbps (megabits per second, if I am not messing up the math). I know there are many actors playing a role here, including the number of files and folders, but I was expecting a bit more throughput.

Of course the bursty writing on the server itself is not a problem per se, as long as the data ends up safely on the platters.

Is it too early to say that I have a throughput problem? Should I measure this in other ways? Am I missing any other important variable in this equation?
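One thing I could try is taking the disks out of the equation and measuring the raw network first, e.g. with iperf3 (a sketch; the server IP is the one I use for the share):
Code:
# on the FreeBSD server
pkg install iperf3
iperf3 -s
# on the Windows client (iperf3 binaries exist for Windows)
iperf3 -c 192.168.1.178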

 
Another test: I selected a folder with bigger files, 12.7 GB in total with files averaging about 1 GB. It took only 8 minutes, so the transfer speed was about 212 Mbps (megabits per second). Much better!!
 
Maths, from GB to megabits: 12.7 GB = 12.7 x 8 x 1000 = 101600 megabits.
Transfer speed = 101600 megabits / (8 min x 60 s) ≈ 212 megabits per second.
 

write performance to the server is not that great. I am running a batch file using robocopy from a Windows 10 client (connected by cable on a 1 Gbit network)
I wouldn't be surprised if the issue is actually on the Windows side. The last time I tried NFS on Windows, performance was abysmal. I'm sure you'd get better performance using Samba instead of NFS with Windows clients.
 
Most likely, the throughput bottleneck is not the file system. Your measured throughput is 212 Mbit/s or 26.5 MByte/s. That is far below what a typical ZFS file system (even with just 1 or 2 disks) can do.

If you really want to measure and tune your ZFS performance, you should run a local benchmark, not measure through a complex system (a remote client, the network, and a file-transfer protocol like NFS). But I think that in reality you are not interested in improving your ZFS performance in isolation; you are interested in tuning your overall system performance. For that, following SirDice's advice (Samba versus NFS) is a better starting point, since the bottleneck is unlikely to be ZFS.
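A minimal local test could be as simple as the sketch below; note that if compression is enabled on the dataset, /dev/zero compresses to almost nothing and the number will be optimistic, so copying a real directory locally is more representative. benchmarks/fio from ports gives more control (block sizes, random vs. sequential) if you want to go further.
Code:
# sequential write of an 8 GiB test file straight onto the pool
dd if=/dev/zero of=/tank/data1/ddtest bs=1m count=8192
rm /tank/data1/ddtest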
 
Thanks a lot ralphbsz and SirDice for your good advice!
I will try Samba then! I have never succeeded with Samba on FreeBSD, to be honest. That was 9 years ago, when I could only get NFS to work, which is why I focused on NFS this time. I always deemed SMB to be very complex and cluttered. Anyway, since I am mostly a Windows power user, this route makes more sense for me.

I will try replacing NFS with SAMBA!

Thanks a lot!
 
I always deemed SMB to be very complex and cluttered.
Samba is not that complicated to set up; you really only need a couple of things in the [global] section of smb4.conf and, obviously, one or more shares. Stick to the simplest configuration: a standalone server with user security. Create an account and make sure it's named the same as the username on the FreeBSD side, but keep in mind these are separate accounts (each has its own password). Don't bother setting any TCP options; a "plain" Samba will run just fine on FreeBSD.
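Creating the Samba account and starting the service is just a couple of commands; a sketch, substituting your own username (the Samba password is set interactively and is independent of the system one):
Code:
# add a Samba account for the existing FreeBSD user
pdbedit -a -u acunha
# enable and start the server
sysrc samba_server_enable="YES"
service samba_server start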
 
Still failing with Samba; this is what I am doing.

Installed the latest version using pkg install samba413

mount
Code:
/dev/nvd0p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
tank on /tank (zfs, local, noatime, noexec, nfsv4acls)
tank/data1 on /tank/data1 (zfs, local, noatime, noexec, nfsv4acls)

/etc/rc.conf
Code:
hostname="moonlight"
ifconfig_igb0="DHCP"
#ifconfig_igb0_ipv6="inet6 accept_rtadv"
samba_server_enable="YES"
winbindd_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"

/usr/local/etc/smb4.conf.
Code:
[global]
workgroup = WORKGROUP
server string = Sambe Server Version 413
netbios name = moonlight
wins support = Yes
security = user
passdb backend = tdbsam

[data1]
path = /tank/data1
valid users = wheel
writable = yes
browsable = yes
read only = no
guest ok = yes
public = yes
create mask = 0666
directory mask = 0755

Added my user to samba
pdbedit -a acunha

I also ran this, following a hint I saw on the net:
# chmod 770 tank/data1

In /tank/data1 I ran ls -l; below is a sample of the folder contents:
Code:
drwxr-xr-x    5 root  wheel            10 Aug  3 12:09 00.Adm
drwxr-xr-x   40 root  wheel            66 May  2  2015 Antonio
drwxr-xr-x    3 root  wheel             3 Oct  1  2017 AppData
drwxr-xr-x    6 root  wheel             6 Oct 30 13:49 Audio
drwxr-xr-x   15 root  wheel            18 Oct 30 14:12 Audio_MixPre3
-rwxr-xr-x    1 root  wheel       2036103 Sep 12  2008 BC425_620_Col62.pdf
-rwxr-xr-x    1 root  wheel      10535229 Sep 12  2008 BC427_EN_Col62.pdf

user "acunha" added to the "wheel" group!

I restarted the server but cannot get the Samba share to appear in Windows when trying to map it using File Explorer > Map Network Drive: "\\192.168.1.178\tank\data1"
The Windows client's network settings allow file sharing.

It would be very good to understand why this doesn't work...
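In the meantime, one thing I can do is check locally whether Samba is offering the share at all (a sketch; smbclient ships with the samba package):
Code:
# list the shares the server exports, authenticating as the Samba user
smbclient -L localhost -U acunha
# confirm the daemons are actually running
service samba_server status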

tks!
 
I get this error in the file log.wb-MOONLIGHT in /var/log/samba4:
Code:
[2021/01/05 14:52:27.162969,  0] ../../source3/rpc_server/rpc_ncacn_np.c:456(rpcint_dispatch)
  rpcint_dispatch: DCE/RPC fault in call lsarpc:32 - DCERPC_NCA_S_OP_RNG_ERROR

the files in the folder:
Code:
log.nmbd                log.wb-BUILTIN          log.winbindd
log.smbd                log.wb-MOONLIGHT        log.winbindd-idmap
 
net/samba412 is the default version for us, you should use that.

Code:
valid users = wheel
wheel is not a user, it's a group. Set it to your user:
Code:
valid users = acunha
Code:
create mask = 0666
Set mask to a more sensible 0644

Then make sure everything in /tank/data1 is owned by your user: chown -R acunha:acunha /tank/data1

I restarted the server and cannot make the samba share to appear in Windows when trying to mount using File Explorer > Map Network Drive: "\\192.168.1.178\tank\data1"
The share is named data1, so it's \\192.168.1.178\data1. The actual path on the server is irrelevant from the client's perspective.


Additional notes: you can remove wins support (WINS is a horrid thing from the past, only useful for pre-Windows 2000 clients) and you don't need winbindd_enable="YES".
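Put together, the share section would then look roughly like this (a sketch based on the points above; adjust to taste):
Code:
[data1]
path = /tank/data1
valid users = acunha
read only = no
browsable = yes
create mask = 0644
directory mask = 0755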
 
I have followed the suggestions and updated everything, thanks for that, then restarted the system.

Unfortunately, the Windows client still can't see the share:
Samba-error.png
 
SAMBA WORKS! 🎭

I searched a bit and managed to find the problem: the username needs to be prefixed with the server address when mapping the drive:
Samba-works.png


Thank you very much! I will now check the performance and post the results here!
 
Much better performance now:
Sample: a folder of 21.8 GB with some large files (Outlook .pst), 317 files in 13 folders in total. Robocopy to the SMB share took 3 min 30 sec!

Transfer speed now with Samba is about 106 MByte/s (roughly 850 Mbps), way better than the 26.4 MByte/s measured over NFS (21.8 GB in 210 seconds ≈ 106 MByte/s)!

Thank you very much SirDice and ralphbsz!
 
Transfer speed now with Samba is about 106 MByte/s (roughly 850 Mbps), way better than the 26.4 MByte/s measured over NFS!
That smells like your bottleneck is now gigabit ethernet. Typical network utilization is 80-90% without extensive tuning.

Honestly, if this is good enough for your use, I would stop worrying now, because the next step is going to be hard or expensive: upgrading the network to 10 gig. Done at scale (all machines on the network), that can be serious money. To fully utilize just the network (completely ignoring client, server and protocol), you already need to tune things like zero-copy, RDMA and packet size (MTU). That's work. And then your disk setup will only get you about 300 MByte/s (perhaps a little more, but more likely less), so paying for 10 gig for a factor of at most 3 (most likely less) seems a bit silly.
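For what it's worth, the MTU part is the easy bit; a sketch for the igb0 interface from your rc.conf, which only helps if every device on the path (switch included) supports jumbo frames:
Code:
# one-off test
ifconfig igb0 mtu 9000
# persistent, in /etc/rc.conf
ifconfig_igb0="DHCP mtu 9000"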
 
I agree with you, ralphbsz, the current transfer rate is great for my use. I don't need more than the current gigabit limit, and I am very happy that Samba is working!

@facedebouc, interesting point! I didn't know that!
 
To get good write speed over NFS with ZFS (NFS issues a lot of synchronous writes) you _really_ need a dedicated log (SLOG) SSD for the ZIL. It doesn't have to be big, but the faster its write speed, the better...
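Adding one later is a single command; a sketch below, where the device name is purely hypothetical and should be replaced by your own SSD partition:
Code:
# attach a dedicated log (SLOG) vdev to the pool
zpool add tank log nvd1p1
# for redundancy you can mirror the log device instead:
# zpool add tank log mirror nvd1p1 nvd2p1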
 