Looking for a new hard disk setup for my home server.....

Hi,

right now I have the following configuration:

Code:
# zpool status
  pool: zbackup
 state: ONLINE
  scan: scrub repaired 0 in 7h5m with 0 errors on Fri Dec  1 22:02:33 2017
config:

        NAME          STATE     READ WRITE CKSUM
        zbackup       ONLINE       0     0     0
          gpt/backup  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: resilvered 456G in 3h7m with 0 errors on Thu May 17 13:17:48 2018
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors

  pool: zusers
 state: ONLINE
  scan: scrub repaired 0 in 3h45m with 0 errors on Fri Dec  1 18:42:25 2017
config:

        NAME           STATE     READ WRITE CKSUM
        zusers         ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  ONLINE       0     0     0
            gpt/disk4  ONLINE       0     0     0
            gpt/disk5  ONLINE       0     0     0

errors: No known data errors
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zbackup  3.62T  3.23T   401G        -         -    51%    89%  1.00x  ONLINE  -
zroot     920G   478G   442G        -         -    29%    51%  1.00x  ONLINE  -
zusers   3.62T  2.40T  1.23T        -         -    16%    66%  1.00x  ONLINE  -

zbackup is used by Bacula and is out of scope here.
zroot contains all sorts of filesystems for the base system and the jails.
zusers contains the filesystems for my wife's and my personal data.

Each disk in zroot and zusers is 1 TB in size.

zroot:
- SAMSUNG Spinpoint F3R (HE103SJ), 1TB, 32MB, SATA-2, 7.1W R/W, 6.2W Idle
- SAMSUNG Spinpoint F2 EcoGreen (HD103SI), 1TB, 32MB, SATA-2, 5.6W R/W, 4.4W Idle (replacement for a failed F3R in May 2018)

zusers:
- WD Caviar Green (WD10EADS), 1TB, 32MB, SATA-2, 5.4W R/W, 2.8W Idle
- WD Caviar Green (WD10EADS), 1TB, 32MB, SATA-2, 5.4W R/W, 2.8W Idle
- WD Caviar Green (WD10EADS), 1TB, 32MB, SATA-2, 5.4W R/W, 2.8W Idle
- WD Caviar Green (WD10EADS), 1TB, 32MB, SATA-2, 5.4W R/W, 2.8W Idle

The zusers disks are now 9 years old (SMART spinup time: 8.5 years).
The zroot disks are 7 years old and one already failed in May.

I'm now considering replacing all six disks with a single RAIDZ configuration, as I fear more hard disk losses in the future because of the age of the drives.
I'm not sure what configuration I should go for.....
- I don't need much more overall disk capacity, but who knows what comes in the future. Given how long I plan to use the new disks, I want to at least double the available capacity.
- Performance is also not the top priority. All they need to achieve is saturating 1 Gbit network transfer rates for both reads and writes.
- I'm taking power consumption into consideration as the system runs 24/7.


My plans:
- RAIDZ1 with 3 disks, each 4 TB: Western Digital WD Red 4TB, 3.5", SATA 6Gb/s (WD40EFRX), which would give me 8 TB of disk capacity at a cost of ~€350 and an r/w power consumption of 13.5 W
- RAIDZ2 with 5 disks, each 3 TB: Western Digital WD Red 3TB, 3.5", SATA 6Gb/s (WD30EFRX), which would give me 9 TB of disk capacity at a cost of ~€460 and an r/w power consumption of 20.5 W

- The power consumption difference means €15-20 more to pay each year for the RAIDZ2 configuration
- The RAIDZ2 configuration is €110 more expensive to buy
+ The RAIDZ2 configuration would tolerate two drive failures at the same time.

Given the fact that only 1 out of 6 hard disks failed in the past 9 years, I'm tending to go for RAIDZ1 again rather than RAIDZ2, even if the latter would be more secure.....
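
For reference, creating the RAIDZ1 variant would look roughly like this (just a sketch; the pool name zdata and the GPT labels are placeholders I'd pick later, and on FreeBSD I'd force 4K alignment via vfs.zfs.min_auto_ashift before creating the pool):

Code:
# force 4K sectors (ashift=12) for newly created pools
sysctl vfs.zfs.min_auto_ashift=12
# 3 x 4 TB RAIDZ1 -- pool name and labels are placeholders
zpool create zdata raidz gpt/newdisk0 gpt/newdisk1 gpt/newdisk2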

What would you go for? Or would you choose something completely different? Any proposals?

Best Regards,
Oliver
 
I would go for the RAIDZ1 with 3 disks. Since you expect to have enough room in the system to hold 5 disks, this gives you room to grow in the future by just adding more disks. Plus, at least in my part of the world, 4TB drives are the optimal $/MB option at the moment.
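
Growing later would just mean adding another vdev, roughly like this (a sketch with made-up pool name and labels; note the new disks form a second RAIDZ1 vdev next to the first one, you can't widen the existing one):

Code:
# add a second 3-disk RAIDZ1 vdev to an existing pool (names are placeholders)
zpool add tank raidz gpt/newdisk3 gpt/newdisk4 gpt/newdisk5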

I personally would also add an SSD and configure it for swap, /tmp, and possibly L2arc (or ZIL) to see if I can get better overall performance. If the SSD doesn't contain any binaries or permanent data, then it doesn't need redundancy. Just watch the SMART stats and replace it when it's nearing EOL. If it fails, your system will probably crash but re-configuring to operate without the SSD won't take long.
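
Something along these lines (a rough sketch; I'm assuming the SSD shows up as ada6, and the partition sizes and GPT labels are just examples):

Code:
# partition the SSD: a swap slice plus the rest for L2ARC
gpart create -s gpt ada6
gpart add -t freebsd-swap -s 8G -l ssdswap ada6
gpart add -t freebsd-zfs -l ssdcache ada6
swapon /dev/gpt/ssdswap
# attach the remainder as L2ARC; use "log" instead of "cache" for a ZIL/SLOG device
zpool add zroot cache gpt/ssdcache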
 
Hi,

thanks for your opinion. The system can physically hold up to six 3½" disks. I have to think about your SSD point. I have a 64 GB disk lying around, but I like the fact that I can simply keep working with the system when a drive breaks, which wouldn't be the case with the SSD setup. If it breaks, I'd have to "do something" to get the system running again. I need the system 24/7 and might just not have the time to "do something" the moment the SSD dies....
 
Hi,

It's worth looking at the backblaze hard drive stats before you buy disks. There's some valuable insights there.

I have 5 x 2 TB WD Reds in a RAIDZ1 configuration.

For simplicity, the root is on a separate ufs mirror (2 x velociraptors).

I have lost one disk in about 6 years.

My ZFS server is too big to back up completely (well, the really valuable bits are backed up). In any event, a low-risk approach to RAID is appropriate.

I deal with lots of physical systems at work, and have seen enough drive failures to know I want more than one parity disk.

You can't change from RAIDZ1 to RAIDZ2 without a complete rebuild. So consider your RAID method carefully, up front.

My next rebuild will use smallish mirrored SSDs for the UFS root (they don't need disk slots if you have velcro), and RAIDZ2 for the tank.

If you are worried about declining reliability, smartd(8) is worth looking at.
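
A minimal setup looks something like this (smartd comes from sysutils/smartmontools; the device names and the self-test schedule here are just examples):

Code:
# /usr/local/etc/smartd.conf -- short self-test daily at 02:00, long test Saturdays at 03:00
/dev/ada0 -a -m root -s (S/../.././02|L/../../6/03)
/dev/ada1 -a -m root -s (S/../.././02|L/../../6/03)

# enable and start the daemon
sysrc smartd_enable=YES
service smartd start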

(The Spinpoints were venerable! I still have a couple.)

Cheers,
 
This is my current disk setup:

Code:
strata:/root 1026 ### ->df
Filesystem   1M-blocks Used Avail Capacity  Mounted on
/dev/ada0s1a      3958 3638     3   100%    /
devfs                0    0     0   100%    /dev
/dev/ada0s1d      3958 1412  2229    39%    /var
/dev/ada0s1e      3958   32  3609     1%    /tmp
/dev/ada0s1f     22804  109 20870     1%    /home
/dev/ada1s1b      3958 2934   707    81%    /usr/src
/dev/ada1s1d      7916 5736  1546    79%    /usr/obj
/dev/ada1s1e      1983 1131   693    62%    /usr/doc
/dev/ada1s1f      7916 2172  5110    30%    /usr/ports
/dev/ada1s1g     87066 2092 78008     3%    /usr/local

As others suggested, /var, /tmp, and swap can be put on an SSD for improved performance. However, for reliability, I would go with a RAID 1 (mirror) setup. As for detecting errors when a drive starts failing (an SSD, for instance), you can write a script that copies a file to the drive and then compares it to see whether there was an error. Put it in /etc/periodic/daily so it runs every day. The drive's firmware will make sure that wear leveling is observed for the flash memory.
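
A rough sketch of such a check (the script name and file paths are made up; it just writes a reference file to the SSD-backed /tmp and compares it, exiting non-zero so the failure shows up in the daily periodic mail):

Code:
#!/bin/sh
# hypothetical /etc/periodic/daily/999.ssdcheck
ref=/root/ssdcheck.ref     # reference file kept on a redundant disk
tmp=/tmp/ssdcheck.$$       # /tmp assumed to live on the SSD
cp "$ref" "$tmp" && cmp -s "$ref" "$tmp"
if [ $? -ne 0 ]; then
        echo "SSD write/verify check FAILED"
        rm -f "$tmp"
        exit 2
fi
rm -f "$tmp"
exit 0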
 
In the meantime I went for three 4TB Seagate IronWolf NAS disks.

Code:
# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0

errors: No known data errors
# zpool list zroot
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot  10.9T   736G  10.2T        -         -     0%     6%  1.00x  ONLINE  -

I zfs send | zfs receive'd all my old filesystems to an external hard drive and used the Fixit prompt to send them back to the new array (with some magic glue here and there).
The restore is still not completely done (around 2 TB of user data is still missing),
but the mission is now more or less accomplished ;)
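
For the record, the round trip boiled down to something like this (simplified; the external pool name zext, the dataset name, and the snapshot name are placeholders, and the "magic glue" is left out):

Code:
# snapshot everything recursively and push it to the external pool
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | zfs receive -F zext/zroot-old
# later, from the Fixit prompt, after creating the new raidz1 pool:
zfs send -R zext/zroot-old@migrate | zfs receive -F zroot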
 