increase zfs send/receive throughput from source to destination

Moving terabytes around takes time.

What methods are available, whether hash-based per-port load balancing or round-robin distribution of packets in sequence, that would allow two hosts to achieve greater than gigabit speeds between them?

Using several Intel gigabit NICs, I would like to increase the transfer speed of zfs send/receive from one machine to another.

I have tested an active LACP aggregation of dual NICs between two OpenSolaris b134 machines connected through an HP ProCurve 2810-48G switch, with LACP and flow control enabled on the aggregated links. Using mbuffer, zfs send/receive transfer speeds remain constrained to about 105 MBytes/sec. I will now be conducting tests with FreeBSD.
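For reference, the mbuffer pipeline I am using looks roughly like this (the port number, buffer sizes, and pool/dataset/snapshot names here are placeholders rather than my exact values):

# on the receiving machine: listen with mbuffer and pipe into zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# on the sending machine: pipe zfs send into mbuffer aimed at the receiver
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090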

Thanks for any help.
 
Btw, if involving a switch is problematic, I may allocate two or more NICs on each machine and connect them directly to each other.
 
Are you sure this is a zfs issue? You seem to be talking about link aggregation. The lagg interface may be what you are looking for. That offers link aggregation and failover.
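On FreeBSD a LACP lagg can be set up along these lines (a sketch only; the interface names em0/em1 and the address are examples, adjust for your hardware):

# /etc/rc.conf
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10/24"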
 
zfs send is a single-threaded transfer, using a single TCP connection between two hosts. IOW, it's limited to a single NIC. Period.

LACP and similar work at the connection level. You cannot go above 1 Gbps for a single connection. However, you can run more than 1 connection at a time, for a combined (aggregate) throughput above 1 Gbps.

LACP and similar will not help you with zfs send.

The only way to make 1 connection go above 1 Gbps is to use a 10 Gbps Ethernet NIC.
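If you can split the data into separate datasets, you can at least run several sends in parallel, one TCP connection each. A rough sketch (dataset, snapshot, host, and port names are made up, and whether the connections actually land on different links depends on the switch's and lagg's hash policy):

# on the receiver: one listener per stream
mbuffer -I 9091 | zfs receive tank/ds1 &
mbuffer -I 9092 | zfs receive tank/ds2 &

# on the sender: one send per dataset, each with its own TCP connection
zfs send tank/ds1@snap | mbuffer -O dest:9091 &
zfs send tank/ds2@snap | mbuffer -O dest:9092 &
wait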
 
davidgurvich, aggregation and failover are functioning and active.

phoenix, I assumed it might be possible to add further layers of abstraction to multipath the single TCP connection over several links simultaneously.

I found someone claiming such an ability here: http://breden.org.uk/2008/04/05/home-fileserver-trunking/#comment-17124

What purpose does IPMP serve in my scenario? From what I have read, am I correct that it would allow multiple aggregation groups, each with a logical IP address, and allow the creation of VNICs on top of the lagg? But again, would individual TCP sessions be limited to one NIC?

What is the purpose of enabling flow control on the NICs and switch?
 
The trunking mentioned in that post is switch-to-switch trunking, and has nothing to do with host-to-host trunking.
 
I appreciate the responses and hope I am not beating a dead horse...

If the ZFS machine were connected to a switch through 10 GbE, is there a way for other hosts connected to that switch through several 1 GbE NICs to achieve more than 1 Gbps? Or does 10 GbE need to be end to end?
 
No matter how many gigabit NICs you trunk together, a single TCP connection cannot go above 1 Gbps, as it will be restricted to a single NIC. But you can run multiple TCP connections (one per NIC) to give a total aggregate bandwidth over 1 Gbps.

If you have 10 client systems all with gigabit NICs, connected to a switch, connected to a server with a 10 Gbps NIC, then you can max out the 10 Gbps connection.
 