Question about ZFS/Raidz

Blu said:
Ok, that sounds perfect, so I would be able to add 10 more drives as raidz2 to the zpool and have it show up as two networked drives worth 16TB of space each (in Windows under 'My Computer - Networked Drives'), right?

No, ZFS is more advanced than that. You will have a pool of 32TB (actually less, as a 2TB drive really only gives you about 1.8TB of usable space). This 32TB can be configured as a single volume, or as many smaller volumes as you require. Check the ZFS documentation...
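For example, a rough sketch of carving one pool into separate filesystems (the pool name "tank" and the dataset names are made up, and the quota is optional):

Code:
# zfs create tank/movies
# zfs create tank/music
# zfs set quota=16T tank/movies

Each dataset gets its own mount point and can be shared over the network on its own, so you can present it to Windows as "two drives" without actually splitting the pool.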

ta Andy.
 
Blu said:
Another odd question: does raidz show me the drives' serial numbers to make it easier to keep track of the drives? Or am I going to have to, say, add the hard drives one by one to keep track of which is which and label them in FreeBSD as I do this?

Seriously? Are you even reading this thread anymore? Or just posting the same question over and over? This is the third time you've asked the *exact* same question. The answer has not changed.

YOU LABEL THE DRIVES BEFORE YOU USE THEM WITH ZFS!!

I know some Windows programs will show you a drive's serial number, and I figured this would be an easy way to keep track of them: write the serial down on a piece of paper, tape it to the enclosure the drive is in, and so on, so it's easier to know which drive is which when I use glabel to mark them in the zpool.

Or, you connect the first drive, use glabel to label it using a nice name like "slot-01". Then connect the second drive, use glabel to label it using a nice name like "slot-02". Repeat until all the drives are labeled. Use whatever labeling scheme makes sense for your hardware layout.

THEN you create the raidz2 vdev using the label devices (/dev/label/slot-01), not the physical devices (/dev/da0).

That way, when you run zpool status, it shows the labels. Thus, if there's a drive marked as "OFFLINE" or "DEGRADED", you just look at the label shown in the zpool output, and you know which drive to replace (label/slot-01: ah, the drive in slot-01).
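A minimal sketch of that whole workflow, assuming the disks show up as da0, da1, ... and using "tank" as a made-up pool name:

Code:
# glabel label slot-01 /dev/da0
# glabel label slot-02 /dev/da1
(...repeat for each drive as you connect it...)
# zpool create tank raidz2 label/slot-01 label/slot-02 label/slot-03 ...
# zpool status tank

zpool status then lists every member as label/slot-NN, so a dead disk maps straight to a physical slot.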

What is so hard to understand about that?
 
Sorry about asking the same question over and over; I'm just trying to get everything answered, and I should've gone back in this thread and re-read some of the answers you guys had already given me. Anyway, I think I've got this build figured out:

1x Antec Twelve Hundred case
4x 5.25" to 5x 3.5" bay converters
1x 1000W Corsair PSU
1x 30GB-64GB SSD drive for the OS
1x 880GMA motherboard left over from an HTPC build (never used because I needed more PCI slots)
2x 2x4GB Mushkin DDR3 RAM
2x 8-channel storage cards
1x 3.4GHz AMD quad-core CPU
20x WD 2TB Green drives, or mix them up with other 2TB drives that are on sale; you can mix and match drives with software RAID, right?
4x multi-colored 120mm fans lol, want to make it look funky.

So overall, does this look like a solid raidz build, with two vdevs both running raidz2 in one zpool? (Did I say this correctly? I want it to show up as two massive drives under My Computer and networked drives, 16TB each. I know it will be a bit smaller because of the actual drive sizes and yada yada, but you get my point, right?) Did I do this all correctly? Am I buying the right parts? I know I'll probably need more 4-pin to 3x 4-pin connectors for the PSU and a lot of SATA cables as well. But yeah, will this run a file server well enough for storing music, movies, TV shows, applications to back up and such? Let me know if there's anything I need to or should change.
 
phoenix said:
What is so hard to understand about that?
Nothing really, I just wanted to see if there was a shortcut I could have taken, but loading a fresh copy of Linux up and down won't be bad at all. I'm just thinking about how my current restarts are taking about 5-10 minutes of pure hang time at the load screen, when I'm used to about a 30 second restart. Darn Windows corruption somehow :( But yeah, can you take a look at my build above this post and let me know if I've gone the right way? Thanks.
 
:( Could someone tell me if this build looks good? I would like to start ordering parts before they all skyrocket in price because of the Japanese disaster.
 
Blu said:
  • 1x Antec Twelve Hundred case
  • 4x 5.25" to 5x 3.5" bay converters
  • 1x 1000W Corsair PSU
  • 1x 30GB-64GB SSD drive for the OS
  • 1x 880GMA motherboard left over from an HTPC build (never used because I needed more PCI slots)
  • 2x 2x4GB Mushkin DDR3 RAM
  • 2x 8-channel storage cards
  • 1x 3.4GHz AMD quad-core CPU
  • 20x WD 2TB Green drives, or mix them up with other 2TB drives that are on sale; you can mix and match drives with software RAID, right?
  • 4x multi-colored 120mm fans lol, want to make it look funky.

Hardware looks fine.
 
Blu said:
20x WD 2TB Green drives, or mix them up with other 2TB drives that are on sale; you can mix and match drives with software RAID, right?

Hi,

Those drives will be 4K Advanced Format drives. There seems to be a good way to handle these in FreeBSD now, see: http://forums.freebsd.org/showthread.php?t=21644. If you were playing it really safe you might go for non-4K drives.
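If I remember that thread right, the usual trick is to use gnop to force 4K alignment when the pool is first created; a rough sketch only (the pool name "tank" and the label names are just examples):

Code:
# gnop create -S 4096 /dev/label/slot-01
# zpool create tank raidz2 /dev/label/slot-01.nop /dev/label/slot-02 ...
# zpool export tank
# gnop destroy /dev/label/slot-01.nop
# zpool import tank

You can then check with zdb | grep ashift; a value of 12 means the pool is 4K-aligned.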

Regarding mixing, just make sure that if you buy 4K drives you mix them with other 4K drives, or 512-byte with 512-byte.

Thanks Andy.
 
Hi,

just raising a concern about those storage controllers. They seem to be AOC-USAS-L8I cards, which are what Supermicro calls UIO. UIO cards are (as far as I understand it) PCIe cards turned backwards and therefore only fit on Supermicro boards with the corresponding UIO slot(?)

Anyone please correct me if I'm wrong, because this sounds almost too cheap to be true =)

/Sebulon
 
phoenix said:
It all depends on which is more important to you: speed or storage space.

If you want speed, you need multiple small vdevs. As in, 3x 8-disk raidz2.
Thanks for posting all this, phoenix. It is obvious that all of the experience you have is "hard won", and it would be unwise to ignore it.

Especially interesting about the power supply issues of a large NAS. If you do the math you are correct. 39W per HDD is a conservative figure to use, so 20*39 is 780. Add some power for your CPU, mobo and 8GB of RAM and you are near 1000W.

Do you have any recommendations of PSU brands/models? I like the Seasonic X-650 as it is quiet, extremely efficient, and has great electrical performance. Unfortunately, the X-850 is as high as their X range goes. Of course, I am not looking to build a 20-HDD NAS; 8 is probably as many as I would go.
 
User23 said:
39W ???

You mean max power consumption at startup? Usually a good storage controller starts the drives one by one rather than all at once, and after startup the consumption is lower.

6.x W
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf

or up to

12.x W
http://www.hitachigst.com/tech/tech...F64DBCC8825782300026498/$file/US7K3000_ds.pdf

with a 3TB 7200rpm drive.
How about 3A on a 12V rail? In other words, 3*12=36W? Here. From the page I linked earlier.

What happens when your drives have gone to sleep and someone wants to copy a file from one raidz2 pool to the other? Does your good controller stage them, or do they all start up at once?
 
Sebulon said:
just raising a concern about those storage controllers. They seem to be AOC-USAS-L8I cards, which are what Supermicro calls UIO. UIO cards are (as far as I understand it) PCIe cards turned backwards and therefore only fit on Supermicro boards with the corresponding UIO slot(?)

Incorrect.

UIO boards are standard PCIe (PCI-Express) cards, and will work in any PCIe slot.

The difference between a UIO board and a standard PCIe board? Which side of the card the bracket attaches to. UIO boards have reversed PCI brackets, so you can't use the included bracket in a normal case with a normal motherboard.

However, if you remove the bracket, the card works perfectly well in any PCIe slot. And you can even buy "normal" brackets for these cards, if you really want to screw them down to the case.
 
carlton_draught said:
Thanks for posting all this, phoenix. It is obvious that all of the experience you have is "hard won", and it would be unwise to ignore it.

Especially interesting about the power supply issues of a large NAS. If you do the math you are correct. 39W per HDD is a conservative figure to use, so 20*39 is 780. Add some power for your CPU, mobo and 8GB of RAM and you are near 1000W.

Do you have any recommendations of PSU brands/models? I like the Seasonic X-650 as it is quiet, extremely efficient, and has great electrical performance. Unfortunately, the X-850 is as high as their X range goes. Of course, I am not looking to build a 20-HDD NAS; 8 is probably as many as I would go.

Our PSUs come with the rackmount chassis, and are hot-swappable with 4 "power-unit" bays. Depending on the use, we fill either 3 or 4 of the bays. Our largest is a 4-way setup with 1300W total power.

I have no experience with non-rackmount PSUs. The last PSU I bought was one of the first modular PSUs, an X-Power, for my ancient desktop. (The local computer shop laughed when they saw the sea of cables sticking out of the PSU, thinking it was some kind of joke; a year or two later, pretty much every PSU company came out with a modular version.)

All I can recommend is to not skimp on the PSU.
 
phoenix said:
Incorrect.

UIO boards are standard PCIe (PCI-Express) cards, and will work in any PCIe slot.

The difference between a UIO board and a standard PCIe board? Which side of the card the bracket attaches to. UIO boards have reversed PCI brackets, so you can't use the included bracket in a normal case with a normal motherboard.

However, if you remove the bracket, the card works perfectly well in any PCIe slot. And you can even buy "normal" brackets for these cards, if you really want to screw them down to the case.

Oh, thank god! =)
 
carlton_draught said:
How about 3A on a 12V rail? In other words, 3*12=36W? Here. From the page I linked earlier.

What happens when your drives have gone to sleep and someone wants to copy a file from one raidz2 pool to the other? Does your good controller stage them, or do they all start up at once?

Yes, you are right, I am pretty sure everyone with a disk array as big as this lets the disks sleep and wake on demand.
 
User23 said:
Yes, you are right, I am pretty sure everyone with a disk array as big as this lets the disks sleep and wake on demand.

From the TS:
I'm basically looking for a massive storage system for my home network for HD movies and HD tv shows
I think there is a fair chance he will leave it on 24/7 for easy access (and maybe he has family who could be watching stuff served by the NAS at any time of day, so they would be ill-served by scheduled system power-downs), but that he won't want to pay to have his disks spinning when they aren't being used. If that is the case, then you want a PSU that can cope with high peak loads (~1kW) but is also efficient at low power draws.
 
Blu said:
Another odd question: does raidz show me the drives' serial numbers to make it easier to keep track of the drives? Or am I going to have to, say, add the hard drives one by one to keep track of which is which and label them in FreeBSD as I do this? I know some Windows programs will show you a drive's serial number, and I figured this would be an easy way to keep track of them: write the serial down on a piece of paper, tape it to the enclosure the drive is in, and so on, so it's easier to know which drive is which when I use glabel to mark them in the zpool. Also, you say it will be listed by the command "zpool status"; don't you mean it won't appear there? Or will that just show me which drives are missing from the zpool? Just want to make sure I've got all this right, because I'm hoping to start ordering some parts soon.

Sorry for the dump, but no one answered the question, and I believe it is quite a common question to have when setting up a server with quite a few HDDs.

No, you don't need to plug in one hard drive, power on, label using glabel, power off, connect another drive, power on, label... etc.

If you install smartmontools, you can run smartctl -i /dev/ad6 (or whatever the path to your hard drive device is) and get the serial number of the drive.

ex:
Code:
# smartctl -i /dev/ad6
smartctl 5.40 2010-10-16 r3189 [FreeBSD 8.2-RELEASE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.11 family
Device Model:     ST31500341AS
Serial Number:    9VS1CAVS
Firmware Version: CC1H
User Capacity:    1,500,301,910,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Thu Apr 14 14:51:36 2011 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

You can then label them with glabel accordingly. :)
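For example, using the serial number from the output above as the label name (purely illustrative; use whatever naming scheme suits you):

Code:
# glabel label disk-9VS1CAVS /dev/ad6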

If you are going to have twenty drives, it will save you a bit of time.

edit: of course, with the hardware you list, I guess you could just hot-plug the disks. Oh well, it could be useful info for someone anyway.
 
Another way of determining the serial number of a drive using only tools in the base system:
# [man]diskinfo[/man] -v /dev/ada0 | grep Disk\ ident

For some redundancy in labeling you can use GPT labels.
# [man]gpart[/man] create -s GPT ada0
# gpart add -b start -s size -t freebsd-zfs -l label ada0
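
The label then shows up under /dev/gpt/ and can be given to zpool directly; a quick sketch with made-up names ("slot-01", pool "tank"):

Code:
# gpart add -t freebsd-zfs -l slot-01 ada0
# zpool create tank raidz2 gpt/slot-01 gpt/slot-02 ...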
 