Setting up a system with 3 or 4 drives and also using ZFS?

I currently have a 1TB drive that I use as the system drive, and a 2TB drive I use for data and backups. I would like to add two more 2TB drives and run them as 3x2TB in ZFS so I can protect my data in case of a drive failure.

I'm just curious what the best way to handle this kind of setup would be.

Do I just set up the system as if there were only the 1TB drive, then add the 3x2TB drives as a pool where I can store my data, as well as backups of the 1TB drive?

Being able to keep the 1TB system drive backed up would just be a plus.

Would it be better to just run the 3x2TB and ditch the 1TB drive? I keep hearing about using ZFS on the root partition and can't tell whether it's a bad idea or not.

Any thoughts?
 
You can always do it like that, with a small UFS / (1GB) and all the rest in ZFS pools:
http://forums.freebsd.org/showthread.php?t=12082

Generally it doesn't matter whether it's 1TB + RAIDZ(3 x 2TB) or only the RAIDZ(3 x 2TB) drives. A 1TB system disk is quite big; a FreeBSD system, even with all the X11 stuff, will rarely exceed 10GB of used space.

Get 8.2-STABLE instead of 8.2-RELEASE so you will have the latest ZFS v28 with dedup=on ;)
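
For example, once a pool is at v28 you can enable dedup per dataset; a minimal sketch, assuming a pool named tank with a dataset tank/data (both names are placeholders):
Code:
# upgrade an existing pool to the newest on-disk format (v28 on 8.2-STABLE)
zpool upgrade tank
# enable deduplication on a dataset
zfs set dedup=on tank/data

Keep in mind that dedup needs a lot of RAM for the dedup table, so it is not a free lunch.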
 
Hi,

Some things to get you started. Personally, I've read it all like, a gazillion times. Things like these are good to have as a reference:

First off, from the good book
Then the wiki
And ultimately, the admin guide. Keep it under your pillow=)

Also, like vermaden said, you can buildworld from 8.2-RELEASE up to 8.2-STABLE, but it requires some tinkering. It might be overwhelming to start out with, but there is excellent documentation about it in the Handbook here

Good luck!

/Sebulon
 
ZFS for the root is good.

You might consider exchanging the 1TB drive for a 2TB drive; with 4x 2TB drives you could then do one of:

1. 2 mirrors of 2TB drives (4TB total usable)
2. raidz1 of 4x2TB drives (6TB total usable)
3. raidz2 of 4x2TB drives (4TB total usable)

Option 1) gives you the best random I/O performance and can sustain two drive failures before you start losing data; the two failed drives have to be from different pairs, however.
Option 2) provides the most storage but allows only one failed drive before you start losing data.
Option 3) provides the same amount of usable storage as the mirrors, but you can lose any two drives before you start losing data. (See the sketch below for how each layout would be created.)
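
As a rough sketch of how each layout is created (the pool name tank and the device names ada0-ada3 are placeholders; in practice you would use the GPT partitions described below rather than whole disks):
Code:
# option 1: two striped mirror pairs
zpool create tank mirror ada0 ada1 mirror ada2 ada3
# option 2: a single raidz1 vdev
zpool create tank raidz1 ada0 ada1 ada2 ada3
# option 3: a single raidz2 vdev
zpool create tank raidz2 ada0 ada1 ada2 ada3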

The primary difference between mirrors and raidz is the write speed and especially the IOPS the pool can handle. For storing large files (backups) it does not matter, and raidz may even be faster. For reading it will be much the same, as ZFS, unlike most other RAID technologies, reads from all drives at the same time.

If you go this route, you could set up your drives with GPT partitions containing:

p1 - boot code
p2 - swap
p3 - ZFS data

This way, you can boot from any of the drives and have swap spread across all of them; there is no real need to mirror swap, and spreading it is much faster than keeping it on one disk. You can also replace drives easily.
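
A hedged sketch of that partitioning with gpart(8) (the disk name ada0 and the swap size are placeholders; repeat for each drive):
Code:
# create the GPT scheme
gpart create -s gpt ada0
# p1: boot code
gpart add -t freebsd-boot -s 128k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# p2: swap
gpart add -t freebsd-swap -s 4g ada0
# p3: the rest of the disk for ZFS
gpart add -t freebsd-zfs ada0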

You can of course set up ZFS on the boot drive separately from the data drives (two zpools). Using ZFS even in a non-redundant setup lets you know when your drive is going awry and whether your data is intact (the OS, in this case).
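
For example, even on a single-disk pool you can ask ZFS to verify every block (the pool name zroot is a placeholder):
Code:
# read and checksum everything in the pool
zpool scrub zroot
# report pool health and any checksum errors found
zpool status -v zroot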

I would go for the redundant all-ZFS setup.
 
Thanks for all the information guys.

vermaden, your guide was already bookmarked :p

Anyway, an afternoon of reading has left me a little confused about what drives I should be buying.

The 2TB drive I've got is a WD20EARS, which is a 4K-sector drive with 512-byte emulation. I see you can use gnop to get around this issue, but apparently the WD20EARS has other issues with trying to go to sleep every 8 seconds (head parking? I didn't really understand what the issue was, though).

So right now I'm trying to decide whether it makes more sense to buy two additional WD20EARS drives and use the gnop solution, or whether I should look at buying some older 512-byte sector drives, such as the Hitachi 5K3000 2TB drives.
 
bigtoque said:
The 2TB drive I've got is a WD20EARS, which is a 4K-sector drive with 512-byte emulation. I see you can use gnop to get around this issue, but apparently the WD20EARS has other issues with trying to go to sleep every 8 seconds (head parking? I didn't really understand what the issue was, though).
By default, ZFS writes data to the drives in 30-second intervals. That means that after 8 seconds the heads get parked, and after another 22 seconds they are 'called back' to actually write the data. This has two implications: first, you will very quickly hit the manufacturer's MAX HEAD PARK COUNT, which brings the drive closer to death; second, the whole system will wait every 30 seconds for the heads to 'get back to work', which is not an instant thing either. IMHO, sell that single drive on eBay or a similar site and buy GOOD drives. I, for example, use Seagate Barracuda LP 2TB drives; they use REAL 512B sectors, but they are 'green/low power' drives, so check WD Black/Blue for more performance. Also, the latest Seagate drives seem to have an 'automatic align mechanism' for the 512B/4K transition, so no gnop/ashift=12 magic is needed on them.
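
For reference, that write interval is the ZFS transaction group timeout; a minimal sketch of checking and tuning it, assuming the vfs.zfs.txg.timeout tunable is available on your version:
Code:
# show the current transaction group flush interval (seconds)
sysctl vfs.zfs.txg.timeout
# set it at boot time via /boot/loader.conf
echo 'vfs.zfs.txg.timeout="30"' >> /boot/loader.conf

Note that this only changes how often ZFS flushes; the 8-second head-parking interval itself is a setting in the WD drive firmware.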
 
@bigtoque

"(head parking? I didn't really understand what the issue was though)"

I can give you a practical example. On 2011-02-25 I bought a WD30EZRS and then, two weeks later, a SAMSUNG HD103SJ. They have been active 24/7 in my NAS since then, and their Load_Cycle_Count values today are
Code:
HD103SJ:  89
WD30EZRS: 21561

Now, that is a huge difference compared to how any other hard drive behaves, but these drives are rated for over 300,000 cycles, so given this data it should be OK for about 6 years or so. Time will tell.
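
Those counters come from SMART; a minimal way to check them yourself, assuming sysutils/smartmontools is installed (the device name is a placeholder):
Code:
# print the SMART attribute table and pick out the load cycle counter
smartctl -A /dev/ada0 | grep Load_Cycle_Count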

/Sebulon
 
While Seagate drives do have this 'smart align', performance is not that great when writes are not 4K-aligned. For any 4K-sector drive it is best to align it with gnop to a 4K boundary (thus setting ashift=12) for optimum performance.
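
A hedged sketch of the gnop trick (pool and device names are placeholders); the .nop provider advertises 4K sectors, so ZFS creates the pool with ashift=12:
Code:
# put a transparent 4K-sector provider on top of one drive
gnop create -S 4096 /dev/ada1
# create the pool through the .nop device; ZFS picks ashift=12 for the vdev
zpool create tank raidz ada1.nop ada2 ada3
# the .nop layer is only needed at creation time
zpool export tank
gnop destroy /dev/ada1.nop
zpool import tank
# verify the alignment
zdb | grep ashift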
 