ZFS iSCSI (NFS is very slow)

NFS is very slow for my clients; I have 10G connectivity between server and clients.
I want to create a zvol on ZFS and connect two initiators to it via iSCSI. I tried that: on one of the initiators I used fdisk to partition the LUN, created an ext4 filesystem on it, and mounted it; the other initiator, after logging in to the target, mounted the same volume. But the files are not synchronized between them, i.e. when one initiator creates a file on the volume the other can't see it unless it unmounts and mounts again. How can I have the same volume mounted on both initiators with the data synchronized between them? Is it even possible?

Thanks in advance for the help.
 
Correct, it is not possible. This would be like having the same hard drive plugged into two computers: they will fight over allocated space and overwrite each other's content. Sorry.
 
You could write a special filesystem for that case...

Would be based on the NFS code.

Overall it might be better to debug the NFS performance problem.
 
You could write a special filesystem for that case...
Typically, these things are called cluster file systems, or shared-disk file systems. They are not uncommon, but they tend to be complex and optimized for large installations. Oracle has open-sourced the Oracle Cluster File System OCFS2, but I don't know whether it is being actively maintained. RedHat used to have GFS (the Global File System, not to be confused with GFS = the Google File System, which is not open source). In the BSD area, Matt Dillon's Hammer2 does clustering (although I don't know whether it was ever coded to completion). To be honest, I think none of the open-source shared-disk options are really functional. A better solution might theoretically be to go directly to a distributed file system, where access to the raw block devices is mediated by cluster hosts; both Ceph and Lustre are freely downloadable, but getting them set up would be a major task.
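For what it's worth, a shared-disk setup with OCFS2 on top of the shared iSCSI LUN would look roughly like this. This is only a sketch under assumed names (node1/node2, the cluster name, IPs, and /dev/sdb are all illustrative), and it assumes the o2cb cluster stack is packaged for your distro:

```shell
# Sketch only: OCFS2 over a shared iSCSI LUN. Hostnames, IPs and the
# device path are illustrative. The cluster.conf/o2cb steps run on BOTH nodes.

cat > /etc/ocfs2/cluster.conf <<'EOF'
cluster:
	node_count = 2
	name = zfscluster

node:
	ip_port = 7777
	ip_address = 192.168.1.11
	number = 1
	name = node1
	cluster = zfscluster

node:
	ip_port = 7777
	ip_address = 192.168.1.12
	number = 2
	name = node2
	cluster = zfscluster
EOF

# Bring up the cluster stack on both nodes (service name varies by distro):
service o2cb enable

# Format ONCE, from either node, with slots for 2 nodes:
mkfs.ocfs2 -N 2 -L shared /dev/sdb

# Then mount on both nodes; the distributed lock manager keeps them coherent:
mount -t ocfs2 /dev/sdb /mnt/shared
```

The important difference from the ext4 attempt above is the lock manager: every metadata update is coordinated between the nodes, which is exactly what ext4 does not do.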

Would be based on the NFS code.
That sort of describes Panasas' PanFS, which is to a large extent built around NFS. Although the flow here went the other way: a lot of NFSv4 was built by starting from PanFS. To give a sense of scale: at its peak, Panasas had about 60-80 engineers working on building this system. Not something an amateur can knock out over a weekend.
 
Can you suggest what could be done? How should I monitor and test it? What would you do if you were in my shoes?

You should start by benchmarking the filesystem both locally and over NFS. I recommend my version of bonnie:

You should use the -s <GB> parameter to select a benchmark size of 1.5x your RAM.

Network benchmarking might also be indicated.
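As a sketch of that sequence, assuming the stock bonnie++ and iperf3 packages (the modified bonnie mentioned above may take different flags) and illustrative paths:

```shell
# Size the run at 1.5x RAM so the page cache cannot mask disk speed.
ram_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
bench_gb=$(( ram_gb * 3 / 2 ))

# Local baseline, run on the server against the pool itself:
bonnie++ -d /tank/bench -s "${bench_gb}g" -u root

# The same run from a client against the NFS mount:
bonnie++ -d /mnt/nfs/bench -s "${bench_gb}g" -u root

# Raw 10G link throughput, to rule out the network itself:
iperf3 -s                    # on the server
iperf3 -c <server> -P 4      # on the client, 4 parallel streams
```

Comparing the local and NFS numbers tells you whether the pool or the NFS path is the bottleneck; the iperf3 run tells you whether the wire is even delivering 10G.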
 
Also, this isn't clear to me: do you already have single-client iSCSI running? Then you should benchmark that, too.
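For the iSCSI side, a raw-block benchmark of the LUN would be something like the following fio sketch (assuming fio is installed and /dev/sdX is the iSCSI device; these jobs are read-only, but be careful never to point a write test at a device holding data):

```shell
# Sequential reads from the raw iSCSI device, bypassing the page cache:
fio --name=seqread --filename=/dev/sdX --rw=read \
    --bs=1M --iodepth=16 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based

# Random 8K reads, closer to typical application access patterns:
fio --name=randread --filename=/dev/sdX --rw=randread \
    --bs=8k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```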
 
Well, you can't share a ZFS VOLUME, but you can share a ZFS filesystem.
If you run ZFS vol-based iSCSI LUNs you are not using NFS.

To share a ZFS FILESYSTEM you need to share the filesystem with "zfs share" and the sharenfs property.
Then the FILESYSTEM (not a volume) would be remotely mountable with NFS.
Then all the traditional NFS management and issues would apply.
 
Speed issues with storage devices usually come down to buffer sizes along the path of the data transfer:
if you are reading 128K from disk every time the app wants 8K of data, that's going to hit you in latency.
If you are using virtualization, try to have the application client read the data from the remote provider as directly as possible,
rather than traversing multiple virtualization layers to get to the data.
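On the ZFS side, that read amplification is controlled by matching the block size to the application's I/O size. A sketch, with illustrative names (note that volblocksize can only be set when the zvol is created):

```shell
# For an iSCSI zvol serving an app that does 8K I/O (e.g. a database),
# set the block size at creation time -- it cannot be changed afterwards:
zfs create -V 100G -o volblocksize=8k tank/vol8k

# For an NFS-shared dataset, recordsize can be tuned at any time
# (it only affects newly written files):
zfs set recordsize=8k tank/export
```

With the default 128K recordsize, every 8K application read can pull a full 128K record off disk, which is exactly the latency hit described above.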
 