NAS using ZFS with FreeBSD, run by a Solaris SysAdmin

About ZFS and RAM ...

I am using ZFS on my home storage box with an Intel T8100 CPU and a 965GM Mini-ITX motherboard along with 1 GB of RAM. I have 2 x 2 TB Seagate Low Power drives put together in a ZFS mirror for storage, everything under control of 64-bit FreeBSD 8.2-STABLE (amd64). I share that 2 TB ZFS pool over SAMBA/NFS to the local LAN/WLAN and even use the box as a server (converting various video formats with FFMPEG and so on) ... and everything is rock solid. You definitely do not need a lot of RAM to use ZFS with FreeBSD. I also do not have any 'manual' limits set in /boot/loader.conf, only module loading:

Code:
$ cat /boot/loader.conf
ahci_load=YES
zfs_load=YES
aio_load=YES
coretemp_load=YES

... and ...

Code:
$ uptime
 2:39PM  up 215 days,  5:37, 4 users, load averages: 0.07, 0.03, 0.01

But it's true that the more RAM you have, the more ZFS shines ;)
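
If anyone wants the rough shape of such a setup, creating the mirror and a dataset to share is only a couple of commands. This is just a sketch; the pool name, dataset name, and device names below are placeholders, not copied from my box:
Code:
# create a mirrored pool named "tank" from two whole disks (device names are placeholders)
zpool create tank mirror ada0 ada1
# a dataset for the SAMBA/NFS share, with lightweight compression
zfs create -o compression=lzjb tank/share
# check the resulting layout
zpool status tank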
 
In the IDE-CF adapter, we are using 2 GB Transcend CF disks:
Code:
ad0: DMA limited to UDMA33, device found non-ATA66 cable
ad0: 1911MB <TRANSCEND 20080128> at ata0-master UDMA33 
ad1: DMA limited to UDMA33, device found non-ATA66 cable
ad1: 1911MB <TRANSCEND 20080128> at ata0-slave UDMA33
Not sure what model of IDE-CF adapter is in the box. I'd have to open it to find out, and that's a little hard to do with this box. I believe it's a StarTech, though.

Hrm, looking through /boot/loader.conf on this system, it appears that DMA is now working. Guess something changed between FreeBSD 7.0 (what was originally installed) and FreeBSD 8.2-STABLE (what's currently running). It's still limited to UDMA33 speeds, though.

In the SATA-CF adapter, we are using 4 GB Kingston Elite Pro CF cards:
Code:
ad4: 3847MB <ELITE PRO CF CARD 4GB Ver2.21K> at ata2-master PIO4 SATA 1.5Gb/s
ad6: 3847MB <ELITE PRO CF CARD 4GB Ver2.21K> at ata3-master PIO4 SATA 1.5Gb/s

These are plugged into StarTech SATA2CF adapters. Looking at /boot/loader.conf on this server, DMA is disabled via
Code:
hw.ata.ata_dma="0"
which is the opposite of what I thought.

These two servers using CF disks are being retired (one has already been retired, the other is in its last month of usage). The new storage servers use SSDs. Much nicer to work with. :)
 
One more significant difference between the Solaris and FreeBSD implementations is that you can add spares, but they are not hot spares like on Solaris.
Human intervention is needed to replace a faulted drive.

FreeBSD accepts the spare without any comment, so if you come from Solaris it looks the same, but the inner workings do not match.
The spare is a cold spare.
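
For reference, the commands look the same as on Solaris; the difference is just that the final replace has to be run by hand on FreeBSD. Pool and device names here are examples only:
Code:
# add a spare to an existing pool (pool and device names are examples)
zpool add tank spare ada3
# after a disk faults, the replace must be initiated manually
zpool replace tank ada1 ada3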

Regards,
Johan Hendriks
 
vermaden said:
I share that 2 TB ZFS pool over SAMBA/NFS to the local LAN/WLAN and even use the box as a server (converting various video formats with FFMPEG and so on) ... and everything is rock solid
Wow, that's great! Like I think I mentioned, video streaming is about the most I'd be doing in terms of heavy-load-where-I-would-care how long it takes, so that is good to hear.
phoenix said:
Hrm, looking through /boot/loader.conf on this system, it appears that DMA is now working. Guess something changed between FreeBSD 7.0 (what was originally installed) and FreeBSD 8.2-STABLE (what's currently running). It's still limited to UDMA33 speeds, though.
That is also good news.
phoenix said:
These two servers using CF disks are being retired (one has already been retired, the other is in its last month of usage). The new storage servers use SSDs. Much nicer to work with.
SSDs were actually what I originally wanted for rootdisk, thinking I could put some ZFS cache on there too (I've actually never done a separate cache vdev before, but that's just because again, most of the servers i deal with have a TON of RAM), but then I realized how much they were and that I'd probably be better buying two CF cards.
Sylhouette said:
One more significant difference between the Solaris and FreeBSD implementations is that you can add spares, but they are not hot spares like on Solaris.
Human intervention is needed to replace a faulted drive.
Hmm, that stinks... I wonder why that is. I wouldn't think it would take much logic to implement (if an array is degraded AND there is a spare available, replace the faulted drive with the spare... seems simple enough to me! In reality I'm sure it's slightly more complex, but still...). Maybe I should write a cronjob that just polls the zpool every minute to see if it's OK, and if not, do the replacement. :)
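
Something like this rough, untested sketch is the sort of cronjob I have in mind; the pool name, spare device, and log tag are just placeholders:
Code:
#!/bin/sh
# rough cron sketch - pool name, spare device, and log tag are placeholders
POOL="tank"
SPARE="ada3"
# "zpool list -H -o health" prints ONLINE for a healthy pool
if [ "$(zpool list -H -o health ${POOL})" != "ONLINE" ]; then
    logger -t zfs-watch "pool ${POOL} is not ONLINE, manual check needed"
    # a real script would parse 'zpool status' for the FAULTED disk and run:
    # zpool replace ${POOL} <faulted-disk> ${SPARE}
fi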

In other news, although I'm always full of questions, I think it's about time I stop screwing around in a virtual environment and just take the plunge. I'm now fairly confident (thanks to all of your feedback) that my old hardware will do the trick! So hopefully tonight I will be ordering the necessary parts to bring it up to speed, and obviously the drives!
 
ctengel said:
In other news, although I'm always full of questions, I think it's about time I stop screwing around in a virtual environment and just take the plunge. I'm now fairly confident (thanks to all of your feedback) that my old hardware will do the trick! So hopefully tonight I will be ordering the necessary parts to bring it up to speed, and obviously the drives!

ZFS itself doesn't handle hot-spares, even in Solaris. All ZFS does is send notifications of dead drives to the OS. What the OS does with that notification ...

On Solaris, you have FMD. That's what watches for the dead drive notifications, then initiates the "zpool replace" using the configured spare drive.

On FreeBSD, we have devd(8), which gets the notification of the dead drive. But we have nothing in place to actually initiate the "zpool replace". At least, nothing official. There are various shell scripts floating around that can be plugged into devd.conf(5) to do this, but they're not exactly bulletproof.
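
For the curious, such a hook is just a devd.conf(5) notify block pointing at one of those scripts. The notify/match/action syntax below is standard devd.conf, but the ZFS event type string and the helper script path are assumptions of mine; verify them against what devd actually receives on your release before relying on it:
Code:
# illustrative snippet only - the ZFS event type is an assumption,
# check it against the notifications devd logs on your system
notify 10 {
    match "system"  "ZFS";
    match "type"    "resource.fs.zfs.removed";
    # hypothetical helper that finds the faulted vdev and runs "zpool replace"
    action "/usr/local/sbin/zfs-replace-spare.sh";
};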
 
So after getting sidetracked by something completely unrelated, I'm back to work on this, and it turns out that despite the fact I swore up and down that "Yes, I have a 64-bit processor," I do not. (No wonder my motherboard only supported so little RAM!) So now I'm back to square one again, and I'm thinking I really do need to buy a new motherboard/CPU/RAM. (While I've definitely had some helpful advice from this thread that maybe I don't need that much RAM, the consensus seems to be that for ZFS, 64-bit is a must. Although if anyone has any experience with 32-bit, I might be convinced otherwise...) And this also probably means I'll be using a PCI-e SATA card (maybe only one needed!), a SATA CF adapter (instead of IDE), etc.
 
It may be worth checking that 64-bit support isn't simply disabled in your BIOS. Look for any options concerning 64-bit support or "long mode".

Worth a shot if you haven't looked in there already; a lot of 64-bit hardware shipped with 64-bit mode disabled in the BIOS by default.
 
If you want to do any kind of heavy lifting with ZFS (lots of NFS shares, lots of Samba shares, several TB of disk space, compression, dedupe, L2ARC, etc) then you will want to use 64-bit FreeBSD, as 4 GB of RAM just won't be enough. :)

If you only have a couple TB of disk space, light compression, no dedupe, only a few clients accessing the pool, then you can get away with 32-bit FreeBSD. I use 32-bit FreeBSD at home with only 2 GB of RAM, but there's only 1 TB of disk space in the pool (2x 500 GB mirror vdevs), 4 GB L2ARC, no dedupe, lzjb compression, and only 3 clients accessing the pool at any one time. Every few weeks I have to reboot the box as it runs out of RAM or mbufs or something and locks up. I'm contemplating migrating it to 64-bit FreeBSD just to get a larger kmem space.
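
For reference, the sort of hand-tuning a 32-bit box needs in /boot/loader.conf looks roughly like this; the sizes are examples only and have to be adjusted to your own RAM and workload:
Code:
# example 32-bit values only - adjust for your own machine
vm.kmem_size="512M"
vm.kmem_size_max="512M"
# keep the ARC small so it cannot exhaust the 32-bit kernel address space
vfs.zfs.arc_max="160M"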
 
Definitely not just the BIOS. I was just going over my purchase records and realized it was an AMD Athlon XP, I believe "Thoroughbred" class. For some reason I could have sworn it was 64-bit, but I believe it is 32-bit.

I am not really planning on much heavy lifting, but am targeting about 5 TB of storage total in mirrored pairs. I might be able to get a 64 bit system soon with about 8-16 GB RAM.
 
After all that it looks like I will be getting new hardware:
  • AMD FX-4100 Zambezi 3.6GHz (3.8GHz Turbo) Socket AM3+ 95W Quad-Core
  • ASUS M5A97 AM3+ AMD 970 SATA 6Gb/s USB 3.0 ATX
  • CORSAIR XMS3 8GB (2 x 4GB) DDR3 1333
  • SYBA SD-ADA40001 SATA II To Compact Flash
  • ASUS 8400GS-512MD3-SL GeForce 8400 GS 512MB 32-bit DDR3 PCI Express 2.0 x16

Turns out it was about $100 USD more to build with all new hardware. I thought I must have been missing something, but it turns out DDR3 is a lot cheaper than DDR1, and I was able to save on the need for SATA controller cards.

I will be getting a much faster system, and I don't think performance will be an issue at all. What I am still trying to determine is whether the AMD SB950 SATA controller works with FreeBSD, whether the UEFI BIOS can boot FreeBSD, and whether using this SATA CF adapter is as straightforward as an IDE CF adapter.
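
Once the board is here, a quick sanity check I'm planning (assuming the SB950 ports are driven by ahci(4)) is just:
Code:
# see whether the SATA controller attached via the ahci(4) driver
dmesg | grep -i ahci
# list the disks the kernel found (the CF card should show up like any other disk)
camcontrol devlist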

Thanks once again for everyone's input on the old setup; I'm really looking forward to becoming a regular FreeBSD user!
 
I've been using rat-slow Kingston 4 GB USB drives mirrored with gmirror for over a year now for my ZFS server. No problems at all.
 