ZFS SMB 3x faster than iSCSI

Hello,
I am new to the BSD world (came from CentOS and never looked back). I have a clean machine with 4 disks: one SSD for the OS (a regular installation, not part of any pool), and 3 HDDs in a raidz (raid5) zpool with a dataset on it.
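For context, the pool setup is nothing special; it would have been created with something along these lines (a sketch only, the pool name and disk device names are placeholders):
Code:
# assumed pool name "tank" and disk devices ada1-ada3
zpool create tank raidz ada1 ada2 ada3
zfs create tank/share        # dataset that later gets shared over Samba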

I made a few jails; one of them runs the Samba daemon as a NAS and has nullfs(5) access to the pool.
Samba performance is roughly 70 MB/s read and 100 MB/s write.
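The nullfs part is just a loopback mount of the dataset into the jail's directory tree, roughly like this (the jail name and paths are made up for illustration):
Code:
# one-off mount into the jail (assumed paths)
mount -t nullfs /tank/share /usr/jails/nas/media/share

# or the equivalent fstab-style line used for the jail
/tank/share  /usr/jails/nas/media/share  nullfs  rw  0  0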

I decided that I want part of the pool as a zvol so I can do iSCSI with it.
I created a zvol on the same pool I use for Samba and did a basic config in /etc/ctl.conf.
I connected the LUN on the same PC I got the Samba numbers from (noted above), and its speeds are around 25 MB/s read and 40 MB/s write.
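By "basic config" I mean the minimal Handbook-style setup, roughly like the sketch below; the target IQN and the pool/volume names are placeholders, not my exact values:
Code:
# create the volume on the existing pool (assumed name "tank")
zfs create -V 100G tank/vol0

# /etc/ctl.conf - minimal single-LUN export without authentication
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2020-01.lan.example:vol0 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/vol0
    }
}

# enable and start the target daemon
sysrc ctld_enable=YES
service ctld start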

zvol props:
Code:
vol0  type                  volume                 -
vol0  creation              Tue Jan  7 20:39 2020  -
vol0  used                  103G                   -
vol0  available             227G                   -
vol0  referenced            73.0G                  -
vol0  compressratio         1.00x                  -
vol0  reservation           none                   default
vol0  volsize               100G                   local
vol0  volblocksize          8K                     default
vol0  checksum              on                     default
vol0  compression           off                    local
vol0  readonly              off                    default
vol0  createtxg             3486238                -
vol0  copies                1                      default
vol0  refreservation        103G                   local
vol0  guid                  10658952942402877800   -
vol0  primarycache          all                    default
vol0  secondarycache        all                    default
vol0  usedbysnapshots       0                      -
vol0  usedbydataset         73.0G                  -
vol0  usedbychildren        0                      -
vol0  usedbyrefreservation  30.1G                  -
vol0  logbias               throughput             local
vol0  dedup                 off                    default
vol0  mlslabel                                     -
vol0  sync                  standard               local
vol0  refcompressratio      1.00x                  -
vol0  written               73.0G                  -
vol0  logicalused           54.6G                  -
vol0  logicalreferenced     54.6G                  -
vol0  volmode               dev                    local
vol0  snapshot_limit        none                   default
vol0  snapshot_count        none                   default
vol0  redundant_metadata    all                    default
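(The listing above is just the standard property output for the volume, produced with something like the following; pool name assumed:)
Code:
zfs get all tank/vol0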
 
You added several layers between Samba and the actual data instead of letting Samba access it directly from the filesystem. What were you expecting to happen?

1) Samba -> filesystem -> disk
2) Samba -> filesystem -> (virtual) disk -> iSCSI -> zvol -> disk
 
You got it wrong, my bad, I should have described what I want in more detail.
First, the actual layers are:
1) Samba -> filesystem (same pool) -> disk
2) iSCSI -> zvol -> same pool -> disk

And I get slower speeds on the iSCSI connection to the zvol compared to Samba.
I wanted to know why, and how it can be fixed, because I heard iSCSI is supposed to be faster.

How can I improve the iSCSI speeds to at least match the Samba speeds?
 
I had/have quite similar results when comparing NFS versus iSCSI (I did not host CIFS).

Check my settings from here:

For the record, I did not use Jumbo Frames (current VLAN network requirements, not my decision).
 
iSCSI is encapsulated inside TCP, and its PDU will be smaller compared to Server Message Block, where you can utilize the entire data size of the packet; there is also compression support in SMB, which increases the data transfer rate. There is an example of a VM (Hyper-V) hosted on an SMB share versus on iSCSI; you can read more about it here: https://docs.microsoft.com/en-us/archive/blogs/larryexchange/iscsi-or-smb-direct-which-one-is-better

 
iSCSI works on blocks and NFS on files, which makes for interesting comparisons.
You should do the same test with the storage on another machine, connected through a Fibre Channel switch. :) Your results could be different.
Sequential writes and random writes can give different results, as can file or directory operations.
 
@Alain De Vos

Don't get things wrong: iSCSI is different from Fibre Channel, which is a data transfer protocol that carries SCSI commands. iSCSI simply puts SCSI commands over TCP/IP. Also, there is Fibre Channel over Ethernet (FCoE). So you can't compare those two.

That is my 'problem' here: I only tested NFS and iSCSI with a ZFS zvol ... :)
Guys, I succeeded in almost matching the performance of iSCSI to SMB.
First, just to let you know, my configs are the simplest you can think of (taken from the Handbook); I am not a server admin or anything.

I found that when I create a file over SMB, the size on disk is at least 1 MB, so I checked further: my pool's recordsize is set to 128k while my zvol was set to 8k. I checked the block-size speeds with attobenchmark.exe and found that this was the bottleneck, so I recreated the zvol with volblocksize=128k (the maximum it allowed me, compared to the 1 MB I saw over SMB, which runs on the pool itself) and formatted it on Windows with a 128k allocation unit. Now the speeds are much closer, 80 MB/s+ read and write.
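For anyone finding this later, the recreation step looked roughly like the sketch below; volblocksize can only be set at creation time, and the pool/volume names are placeholders. The idea is simply to make the zvol block size match the pool recordsize and the NTFS allocation unit, so large writes map to whole blocks instead of read-modify-write cycles on 8k blocks.
Code:
# destroy the old 8k-volblocksize volume (this wipes its data!) and recreate it
zfs destroy tank/vol0
zfs create -V 100G -o volblocksize=128k tank/vol0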

I also fine-tuned the zvol thanks to varmaden's post.
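The tuning shows up as the properties marked "local" in the zfs get listing earlier in the thread; expressed as commands it would be something like this (volume name assumed, and whether each setting helps depends on your workload):
Code:
# properties with SOURCE "local" in the listing above
zfs set compression=off tank/vol0
zfs set logbias=throughput tank/vol0
zfs set sync=standard tank/vol0
zfs set volmode=dev tank/vol0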
 