Other iSCSI transfer: 12 TB job, 24 TB down, 10 TB up. How's that work?

I set up a 16 TB ZFS pool (the vdevs are all two-way mirrors) as a "life preserver" array for a colleague who needed iSCSI storage for his 12 TB of life's work online within a matter of weeks. I see there are different phases to an iSCSI transfer, but I don't see how anything the target would normally send could come anywhere close to 10 TB up (that's almost as large as the dataset itself!). Especially since these are mainly 500 MB image files, there shouldn't be much per-file overhead. This seems rather up-heavy compared to my (admittedly limited) experience, but that has all been UNIX-to-UNIX, never iSCSI. So I really have three unknowns:

1) iSCSI - is there something inherently chatty about the protocol?
2) The iSCSI initiator is a Windows Server 2003 box. Could it be NTFS that's generating this much traffic?
3) Within Windows, I'm using robocopy. I know this is a FreeBSD forum, but I'm guessing more than a few folks providing ZFS services have experience with a heterogeneous network that includes Windows servers. I'm only checking for file existence, file size, and write date (roughly like the sketch below), but could my robocopy options be the problem?
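
(A minimal sketch of that kind of robocopy job, just for illustration - the drive letters, share path, and log location are placeholders, not the real ones. Robocopy's default comparison already skips files whose size and last-write time match, so no extra switches are needed for the "exists + size + date" check:)

rem Placeholder paths: D:\images is the local source, X:\images the iSCSI-backed volume.
rem /E         copy subdirectories, including empty ones
rem /COPY:DAT  copy data, attributes, and timestamps (robocopy's default)
rem /R:2 /W:5  limit retries and wait time so one flaky file doesn't stall the job
rem /NP /LOG:  no per-file progress output, write a log file instead
robocopy D:\images X:\images /E /COPY:DAT /R:2 /W:5 /NP /LOG:C:\logs\robocopy.log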
 
Not exactly sure if this is the problem, but iSCSI basically just presents a disk image: any read or write generates network traffic, just as it would put data on the bus to a 'regular' local disk. The only real difference is that the "bus" now runs over TCP/IP. One thing to check is whether NTFS is updating access times every time a file is touched; that alone would generate a lot of extra traffic.
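
If it does turn out to be last-access updates, NTFS lets you switch those off globally. A sketch, assuming fsutil on the Windows box supports it (the exact behaviour can differ between Windows versions, and older ones may need a reboot for the change to take effect):

rem Show the current setting (1 means last-access updates are disabled)
fsutil behavior query disablelastaccess
rem Disable last-access-time updates on NTFS volumes
fsutil behavior set disablelastaccess 1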
 
Thanks! Yeah, we're upgrading to Win 2012 for the new app server. Just trying to stay above water right now!
 