A FreeBSD NAS is born (part 2)

My latest ZFS system is quite a bit more than just a NAS, but data storage will be its primary function, and the plan is to populate it with up to 24 TB of raw disk (16 TB usable after RAIDZ2). Here's the lowdown:

  • Antec Twelve Hundred
  • Intel DH67BL
  • Intel Core i5 2500
  • 4x 4 GB 1333 MHz DDR3
  • Corsair AX750
  • Supermicro AOC-USAS2-L8i
  • 6x Western Digital Caviar Black 2 TB
  • 2x OCZ Vertex 2 240 GB
  • 4x Icy Dock MB973SP-B

So far all's working as expected on a recent 8.2-STABLE checkout. Boot time dmesg attached.
 

I'm booting off the two OCZ SSDs. Each has two bootable primary partitions, and the matching partition from each disk is joined in a gmirror(4) volume. I'm using two boot areas so that upgrades can be tested on one while leaving a fully working system on the other. The third partition is for swap space, the fourth for the mirrored ZIL, and the fifth is either spare or for L2ARC later.

Partitions all 1 MiB aligned, except the first (corrected in its BSD label):

Code:
# gpart show
=>       63  468862065  ada2  MBR  (223G)
         63   67221441     1  freebsd  [active]  (32G)
   67221504   67221504     2  freebsd  (32G)
  134443008   67221441     3  !218  (32G)
  201664449  267197679     4  !15  (127G)

=>       63  468862065  ada3  MBR  (223G)
         63   67221441     1  freebsd  [active]  (32G)
   67221504   67221504     2  freebsd  (32G)
  134443008   67221441     3  !218  (32G)
  201664449  267197679     4  !15  (127G)

=>        0  267197679  ada2s4  EBR  (127G)
          0   33675264       1  !218  (16G)
   33675264  201406464  534529  !218  (96G)
  235081728   32115951          - free -  (15G)

=>        0  267197679  ada3s4  EBR  (127G)
          0   33675264       1  !218  (16G)
   33675264  201406464  534529  !218  (96G)
  235081728   32115951          - free -  (15G)

=>       0  67221440  mirror/shrekGM0  BSD  (32G)
         0      1985                   - free -  (992k)
      1985   2097152                1  freebsd-ufs  (1.0G)
   2099137   8388608                2  freebsd-ufs  (4.0G)
  10487745  33554432                4  freebsd-ufs  (16G)
  44042177  23179263                   - free -  (11G)

=>       0  67221503  mirror/shrekGM1  BSD  (32G)
         0      2048                   - free -  (1.0M)
      2048   2097152                1  freebsd-ufs  (1.0G)
   2099200   8388608                2  freebsd-ufs  (4.0G)
  10487808  33554432                4  freebsd-ufs  (16G)
  44042240  23179263                   - free -  (11G)
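
For anyone wanting to build a similar layout, here's a rough sketch of the gmirror(8) and gpart(8) steps for one of the two boot mirrors. It is not a transcript of the exact commands used here, and it glosses over the 1 MiB alignment and the skipped index 3 shown above; the volume name and partition sizes are simply lifted from the output.

Code:
# join slice 1 of each SSD into the first gmirror volume
gmirror label shrekGM0 /dev/ada2s1 /dev/ada3s1

# put a BSD label on the mirror and carve out the UFS partitions
gpart create -s BSD mirror/shrekGM0
gpart add -t freebsd-ufs -i 1 -s 1G  mirror/shrekGM0
gpart add -t freebsd-ufs -i 2 -s 4G  mirror/shrekGM0
gpart add -t freebsd-ufs -i 4 -s 16G mirror/shrekGM0

# new filesystem on the first resulting provider (mirror/shrekGM0a)
newfs -U /dev/mirror/shrekGM0a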

Code:
# zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

	NAME                 STATE     READ WRITE CKSUM
	data                 ONLINE       0     0     0
	  raidz2             ONLINE       0     0     0
	    da0              ONLINE       0     0     0
	    da1              ONLINE       0     0     0
	    da2              ONLINE       0     0     0
	    da3              ONLINE       0     0     0
	    ada0             ONLINE       0     0     0
	    ada1             ONLINE       0     0     0
	logs
	  mirror             ONLINE       0     0     0
	    ada2s4+00000001  ONLINE       0     0     0
	    ada3s4+00000001  ONLINE       0     0     0
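
For reference, a pool with that shape could be created with something along these lines. This is illustrative only, not the exact command that was run; the device names are copied straight from the status output above.

Code:
# six-disk raidz2 with a mirrored log (ZIL) on the SSD slices
zpool create data raidz2 da0 da1 da2 da3 ada0 ada1 \
    log mirror ada2s4+00000001 ada3s4+00000001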

Some case photos attached. :)
 

Attachments

  • nas1.jpg
  • nas2.jpg
  • nas3.jpg
More photos - internals showing how the Supermicro UIO card fits. It kinda just sits there as there's no support from the case.

Finally, the half-populated system powered on. I'm not a huge fan of all these blue LEDs, but the darn things are hard to avoid these days! :)
 

Attachments

  • nas6.jpg
  • nas5.jpg
  • nas4.jpg
Really nice, man. Looking good! :P
edit:
Sorry, a couple of questions:
Why the Supermicro AOC-USAS2-L8i, and what driver does it use? I read that it supports SATA 3, but your disks are SATA 2 (I might be wrong on that one). And how many SATA ports does it have (I only see 4 in some pics)?
:D
 
AOC is the model prefix, USAS2 is the interface (SAS2), which is 6 Gbps SAS (it supports 6 Gbps SATA as well), and L8i means internal multi-lane connectors with 8 channels.

The AOC-USAS2-L8i uses the LSI2008 chipset, supported by the new mps(4) driver. This is the replacement for the old standby AOC-USAS-L8i which uses the LSI1068 chipset and the mpt(4) driver.

The LSI1068 chipset has a hardware limitation of 2 TB per physical disk, so using the LSI2008 chipset allows for upgrading to newer, better, faster, larger disks without needing to replace the controller.
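
If you ever need to confirm which of the two drivers actually claimed a card, the base system tools are enough. Something like this should do it (nothing here is specific to the Supermicro card):

Code:
# show any mps/mpt controllers the kernel attached to
dmesg | grep -E '^(mps|mpt)[0-9]'

# or find the HBA by its PCI vendor/device strings
pciconf -lv | grep -B 3 -i sas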
 
I see. I have a Gigabyte EX58-UD4 motherboard with an Intel i7 on it; it has 6 SATA ports on the Intel controller and an extra two GSATA ports (JMicron, I believe) running over the PCIe bus.
Would there be any (serious) advantage, in terms of stability, to running an add-on controller such as this one (which is why I was asking), or would the default ports on the motherboard be fine for a big storage server possibly holding several TB of data in the future? This one seems like the best investment for its cost, but I would most likely have problems finding it here.
Thank you.
 
So long as the mobo ports support AHCI mode, you can use those as well.

It really depends on how many PCIe lanes are assigned to the mobo SATA ports. You don't want to stuff 8 drives onto there only to find out that all the onboard ports are connected via a single x1 link. :)

Plus, it depends on how well the driver handles hot-swap, hot-plug, and "dead device" notifications. And how well the management tools work.

The more PCIe buses you can spread the load around onto, the more controllers you can spread the load around onto, the more hard drives you can spread the load around onto ... the better.
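
One rough way to check this is to look at the controller's PCI capabilities; the PCI-Express capability line includes the negotiated link width. Assuming the onboard controller attaches as ahci0 (output varies by FreeBSD version and chipset):

Code:
# list the AHCI controller with its capabilities, including PCIe link width
pciconf -lc | grep -A 4 '^ahci'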
 
Yeah, it does support AHCI. The idea was to run 6 disks on the Intel controller for storage, and one disk on the JMicron for the base system. In any case, my read/write speed right now on 3 disks seems to be stuck at around 30 MB/s, which I would attribute to the AES-256 encryption running over it. Not sure if that's normal. If it is normal, there really wouldn't be an issue even with some sharing of buses.
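
If that AES layer is geli(8), a crude way to see whether the crypto is really the bottleneck is to compare a raw read of the underlying disk against a read through the .eli provider. The device name below is just a placeholder:

Code:
# raw read from the plain provider
dd if=/dev/ada1 of=/dev/null bs=1m count=2000

# the same read, decrypted through geli
dd if=/dev/ada1.eli of=/dev/null bs=1m count=2000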
 
Good answers from Phoenix.

I can confirm that mps(4) and this card handle hot-swap and hot-plug fine. A dead device seems to cause a cryptic kernel message that repeats every second.

We chose this card because in total we'll have 14 disks attached: 4-5 onboard, 8 on the Supermicro controller, and 1-2 on a siis(4)-based controller.

I'm looking forward to Z68-based boards, as that will make it easy to have two Supermicro controllers (2x PCIe x8) plus the usual onboard controller.
 
aragon,
I have a question about your case, if you don't mind answering.
How's the cooling for your hard drives? Any possible issues with that in the future?
In other words, are you satisfied with it?
 
Airflow in the case is beyond excellent. All the Icy Docks have their own fans too, and no drive goes above 46 °C. I don't see any cooling issues, provided the fans are reliable.
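
(For anyone wanting to check their own drives: temperatures are easy to read with sysutils/smartmontools. The device name is just an example, and disks behind a SAS HBA may need an extra -d option.)

Code:
smartctl -a /dev/ada0 | grep -i temperature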
 
Oh I see, Icy Docks... nice. Did the case come with 3.5" internal space for HDDs, or is it just an empty 5.25" shell and you need the Icy Docks?
Oh, and one more thing: how's the noise from the fans?! :)

Thanks much!
 
The case itself is all 5.25", but it comes with 3.5" bay reducers pre-installed. The bay reducers are quite decent too: fans, air filtering, vibration absorption, and thumbscrews.
 
I got the opportunity to build another ZFS system similar to this previous one. The biggest difference is that this new system is running a Z68 board, which means I could use two Supermicro AOC-USAS2-L8i cards instead of one. All 12 disks in its pool are connected via mps(4), and there's still room for some more x4, x1, and/or PCI cards. Space for more disks is tight though, so 12 will be the max. The motherboard is an Asus P8Z68-V, which I recommend.
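
If you want to double check which controller each disk ended up on, camcontrol(8) will show every CAM bus and the controller (mps0, mps1, ahci0, ...) it hangs off:

Code:
# verbose device list: every bus, its parent controller, and attached disks
camcontrol devlist -v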

Some photos and boot time dmesg attached. :)

Oh, and this:

Code:
$ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  32.5T  2.00G  32.5T     0%  1.00x  ONLINE  -
 

Attachments

  • zfs4.jpg
  • zfs3.jpg
  • zfs2.jpg
  • zfs1.jpg
  • dmesg.boot.txt
How did you attach the UIO cards? It seems as though you just unscrewed them from the PCI slot cover plate and then plugged them into the PCIe slot. Is my assumption correct, or did you somehow fabricate a new PCI cover plate?
 
Does SuperMicro sell them, or are they from another vendor?

Googling suggests that all that needs to be done is to reverse the slot cover with the help of some spacers.
 
aragon said:
I got the opportunity to build another ZFS system similar to this previous one. The biggest difference is that this new system is running a Z68 board, which means I could use two Supermicro AOC-USAS2-L8i cards instead of one. All 12 disks in its pool are connected via mps(4), and there's still room for some more x4, x1, and/or PCI cards. Space for more disks is tight though, so 12 will be the max. The motherboard is an Asus P8Z68-V, which I recommend.

Nice project!

One question: you say the P8Z68-V allows for two Supermicro AOC-USAS2-L8i cards instead of one. Is that because the previous version of the NAS with the Intel mobo had only one x16 PCIe slot?

Second question: have you done any performance tests?
 
mix_room said:
Does SuperMicro sell them, or are they from another vendor?

Other vendors. The SuperMicro UIO cards are very popular, so a couple of different vendors make brackets for them.
 