Files corrupted with ZFS and Samba

Thanks again to everyone for your help.
Please correct me if I'm wrong: if I want to automate snapshot creation and zfs send (via a cron job) to another server on the local network, I don't need to set up NFS shares (or FTP, or anything else) on the target machine that will receive the snapshots, since I can just use SSH to transfer the backups?
So the FreeNAS machine on which I'll store the snapshots (the first send being a full backup of the zpool, and subsequent ones only incremental backups from the very latest snapshot, as I understand it) will only require an OpenSSH server (which virtually every server needs anyway)? Is there a version of FreeBSD similar to the Debian netinstall ISO, which lets you choose which services to install (I usually select only SSH server + system utilities and manually install other packages depending on the server's role)?
Do you think the fact that FreeBSD 10.1 will ship with a different (more recent) version of ZFS than the one installed on the Samba server might cause any issues? I suppose it's backward compatible, but if so, will I be able to take advantage of the new features implemented in the newest version of ZFS?
I've seen quite a few articles that recommend using SSDs for caching to improve file system performance. Since 1 GB of RAM is on the low end, is it a good idea to use SSDs for the L2ARC (read cache, so it doesn't really matter if an SSD dies; data will just be read from the HDDs) and the ZIL (write log, so to be on the safe side I should use a mirror of at least two SSDs)? Will it be all the more beneficial since the hardware is really old and expected to perform poorly?
Also, as mentioned earlier, the server has a RAID controller, so I must not use RAID 5, for instance, but only JBOD or "HBA mode"; otherwise ZFS will not have access to the raw drives and therefore will not be able to use the specific mechanisms that make it so powerful?

Last but not least, what is the difference between piping the output of zfs send and redirecting its standard output?
I know, for example, what ls -ltr /home/user | grep script or find / -name "*.py" > Python_Scripts.txt do (those are relatively basic commands, but I can't yet write an awk example without copying/pasting it from Google :p). But for zfs send, do a pipe and a redirect basically achieve the same result in different ways?
 
Thanks again to everyone for your help.
Please correct me if I'm wrong: if I want to automate snapshot creation and zfs send (via a cron job) to another server on the local network, I don't need to set up NFS shares (or FTP, or anything else) on the target machine that will receive the snapshots, since I can just use SSH to transfer the backups?

Yes. You could do something like this:

zfs send tank/myfs@now | ssh host zfs receive tank/backup/myfs

http://docs.oracle.com/cd/E19253-01/819-5461/gbinw/
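To automate that from cron, a script along these lines could work. It is only a sketch: the dataset ("tank/myfs"), the backup host ("backuphost"), the target dataset ("tank/backup/myfs"), and the date-based snapshot naming are all placeholder assumptions, and the `echo` prefixes make it a dry run that prints the commands instead of executing them — drop them once the names match your setup.

```shell
# Hypothetical names -- adjust to your pool layout and backup host.
DATASET="tank/myfs"
TODAY="auto-$(date +%Y-%m-%d)"
# Yesterday's snapshot name: BSD date syntax first, GNU date as a fallback.
PREV="auto-$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d yesterday +%Y-%m-%d)"

# 1. Take today's snapshot; 2. send only the delta since yesterday's (-i).
echo zfs snapshot "${DATASET}@${TODAY}"
echo zfs send -i "${DATASET}@${PREV}" "${DATASET}@${TODAY}" \| \
     ssh backuphost zfs receive tank/backup/myfs
```

The first run has no previous snapshot to be incremental from, so it would have to be a plain full send (zfs send without -i).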

Do you think the fact that FreeBSD 10.1 will ship with a different (more recent) version of ZFS than the one installed on the Samba server might cause any issues? I suppose it's backward compatible, but if so, will I be able to take advantage of the new features implemented in the newest version of ZFS?

It's backward compatible with older zpool versions, but it will warn you if the zpool needs to be upgraded. I believe it will in your case, since you're running an older version.

You can issue the commands:
zfs upgrade
zpool upgrade mypool

If you're doing a fresh install of FreeBSD 10.1, then you only need to upgrade the zpool after you import it, since the newer ZFS boot code will already be present. If you're upgrading FreeBSD from 8.x or 9.x to 10.x, then you'll need to upgrade zfs and the zpool, and update the FreeBSD boot code.

Warning: you will need to update the FreeBSD boot code using gpart for the system to boot from the newer zpool version.
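For a GPT-partitioned boot disk, the boot code update could look like the command below. The disk (ada0) and the partition index (1, the freebsd-boot partition) are assumptions — verify yours with gpart show first. The leading `echo` makes it a dry run that only prints the command; remove it to actually rewrite the boot code.

```shell
# Dry run: prints the command instead of running it.
# Assumes a GPT disk ada0 with the freebsd-boot partition at index 1.
echo gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```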

Warning: the newer zpool version is not compatible with other operating systems such as Solaris. Oracle stopped sharing the ZFS code base a few years ago, so since Oracle's ZFS code is no longer open source, the FreeBSD team has been developing and improving its own ZFS code base.

I've seen quite a few articles that recommend using SSDs for caching to improve file system performance. Since 1 GB of RAM is on the low end, is it a good idea to use SSDs for the L2ARC (read cache, so it doesn't really matter if an SSD dies; data will just be read from the HDDs) and the ZIL (write log, so to be on the safe side I should use a mirror of at least two SSDs)? Will it be all the more beneficial since the hardware is really old and expected to perform poorly? Also, as mentioned earlier, the server has a RAID controller, so I must not use RAID 5, for instance, but only JBOD or "HBA mode"; otherwise ZFS will not have access to the raw drives and therefore will not be able to use the specific mechanisms that make it so powerful?

I don't think it will make much difference whether you use SSDs or HDDs with a computer that old; it will be slow either way. A cache may help, but not by much.

Last but not least, what is the difference between piping the output of zfs send and redirecting its standard output?

Pipe zfs send into zfs receive when you want to transfer a dataset and its files from one FreeBSD/ZFS server to another in one step. That way the dataset is identical on both servers, and you can access the files on either one.

Using zfs send with redirected standard output only produces a stream file. You won't be able to read anything from it until you feed it to zfs receive.
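The same distinction can be demonstrated with tar, which is runnable anywhere: a tar stream, like a zfs send stream, is an opaque blob until the matching "receive" step (tar -x) unpacks it. The /tmp/demo paths are just scratch locations for the illustration.

```shell
# Set up a source directory with one file.
mkdir -p /tmp/demo/src /tmp/demo/via_file /tmp/demo/via_pipe
echo hello > /tmp/demo/src/file.txt

# Redirect style: store the opaque stream as a file, extract it later.
tar -C /tmp/demo/src -cf /tmp/demo/stream.tar .
tar -C /tmp/demo/via_file -xf /tmp/demo/stream.tar

# Pipe style: stream straight into the extractor, no intermediate file.
tar -C /tmp/demo/src -cf - . | tar -C /tmp/demo/via_pipe -xf -

# Both end states are identical; only the redirect left a stream file behind.
cat /tmp/demo/via_file/file.txt /tmp/demo/via_pipe/file.txt
```

Likewise, zfs send ... > backup.zfs followed later by zfs receive ... < backup.zfs ends up in the same place as the direct pipe; the pipe just skips the intermediate file.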

I know, for example, what ls -ltr /home/user | grep script or find / -name "*.py" > Python_Scripts.txt do (those are relatively basic commands, but I can't yet write an awk example without copying/pasting it from Google :p). But for zfs send, do a pipe and a redirect basically achieve the same result in different ways?

Can't comment since I'm not a Python expert. :p
 