Background: I've never used ZFS and I am much more comfortable with hardware-based RAID, based on years of practice.
I'm setting up some new fileservers. Their primary use will be as Samba servers for backups of a few dozen systems (PCs with ShadowProtect and other FreeBSD boxes with rdump). That data will be pretty much write-only. I'll also be using them as general-purpose fileservers, so that part will be mostly reads. All of these systems are connected via Gigabit Ethernet.
I'll be using these to replace my existing units, which have 16 400GB drives on two 8-port 3ware controllers (each controller has 7 drives in a RAID 5 set plus a hot spare). I currently have three of these systems: a production server here, an off-site replication server connected via Gigabit Ethernet and synchronized with rdiff-backup, and a hot spare here. Details here if anyone is interested.
The new systems will have 16 2TB drives on a single 3ware 9650SE-16ML controller (with battery backup), two E5520 CPUs, and 48GB of RAM. There will also be an Ultra320 SCSI card connecting each system to a 16-slot DLT-S4 autoloader (12.8TB uncompressed). They'll run FreeBSD 8.x, tracking 8-STABLE.
Obviously, I'll need to use a filesystem other than UFS, since chopping the array up into 2TB-sized chunks is impractical. ZFS seems to be the answer, but there are a huge number of configuration choices. Given my comfort level with the 3ware products, my instinctive solution is to set up 15 of the drives as a single RAID 6 logical unit with the 16th as a hot spare, and then carve out something like four 8TB partitions and format them with ZFS. I like being able to do RAID rebuilds at the controller level; that has been solid for many years, while ZFS on FreeBSD is much newer and has less of a track record.
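For reference, the ZFS side of that hardware-RAID approach would be trivial, since the controller presents a single big logical unit. A minimal sketch, assuming the 3ware unit shows up as da0 and using hypothetical pool and dataset names (with ZFS, quota-limited datasets can also stand in for fixed-size partitions, since they share the pool's free space):

    # Pool on top of the controller's RAID 6 unit; redundancy,
    # spares, and rebuilds are all handled by the 3ware card.
    zpool create tank da0
    # Datasets instead of fixed 8TB partitions; each can get a quota.
    zfs create tank/backups
    zfs create tank/shares
    zfs set quota=8T tank/backups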
However, ZFS is aware of the individual spindles and may be able to do a better job of scheduling and optimizing I/O if I simply export all 16 drives individually and let ZFS manage them.
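The all-ZFS equivalent would look something like the following. A sketch, assuming the controller can export the disks individually and they appear as da0 through da15 (device names are placeholders):

    # 15-disk raidz2 vdev (double parity, analogous to RAID 6)
    # with the 16th disk as a ZFS-managed hot spare.
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
        da8 da9 da10 da11 da12 da13 da14 \
        spare da15
    zpool status tank

A common variation is two smaller raidz2 vdevs (say, 8 disks each), which gives up some capacity but can improve random I/O, since ZFS stripes across vdevs.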
The current fileservers have a smallish (60GB) slice which contains FreeBSD. I can continue to do this with the new servers, or I can install separate storage for the OS. This could be a pair of 2.5" SATA drives (cabled to the motherboard controller) in RAID 1, or a CompactFlash card (though I'd be concerned about card lifetime given the amount of writing done for log files, etc.). I can stick with UFS2 for the system partitions or use ZFS. Given that ZFS boot support is quite new, and from what I've read the 8.0 distribution disk was cut before that feature was added, it seems I should stay with UFS2 for the FreeBSD partitions.
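If the OS ends up on the pair of 2.5" SATA drives, FreeBSD's gmirror provides the software RAID 1 without involving the 3ware card. A sketch, assuming the drives appear as ad4 and ad6 on the motherboard controller (names are placeholders):

    # Create a GEOM mirror named gm0 from the two OS disks
    # and make sure the module loads at boot.
    gmirror load
    gmirror label -v -b round-robin gm0 ad4 ad6
    echo 'geom_mirror_load="YES"' >> /boot/loader.conf
    # The mirror then appears as /dev/mirror/gm0, ready to be
    # sliced and newfs'ed with UFS2 as usual.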
The goal is the fastest I/O possible in day-to-day use. Other factors are the performance of the weekly tape backup job (a 2TB UFS2 filesystem takes a while to snapshot) and the nightly rdiff-backup job. I don't have to keep using rdiff-backup, but it has been doing a very good job. Fast, simple file access on the backup server is a must, so no special backup container formats.
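Snapshot speed, at least, favors ZFS: ZFS snapshots are effectively instant regardless of filesystem size, unlike UFS2 snapshots. The weekly tape job could then dump from a frozen snapshot. A sketch, assuming a hypothetical tank/backups dataset and the autoloader's tape drive at /dev/sa0:

    # Take a near-instant snapshot and stream its contents to tape.
    # Files stay plain files, so no special container format.
    zfs snapshot tank/backups@weekly
    tar -cf /dev/sa0 -C /tank/backups/.zfs/snapshot/weekly .
    zfs destroy tank/backups@weekly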
Of course, I can try different configurations and benchmark them to find the best solution, but I'd appreciate any advice from users here on things to try or things to avoid.