ZFS: 4 drives coming - RAID-Z1, or what?

In the end, you'll pick some setup and find out whether you're happy with it or not over the course of a few months or years. Next time around, you'll set it up the same way but slightly improved, or you'll do something else entirely to see if it makes you happier. Just keep the most important data backed up on a separate machine/external drive, and you'll be fine.

The FreeBSD community enjoys chasing the last half percent of performance, or reliability that survives concurrent meteorite impacts and a zombie apocalypse, for workloads comparable to Amazon.com. As a private user, just have fun trying stuff out, keep decent backups and don't worry too much about the details ;)

I've found that ZFS enlightenment comes first from understanding what it actually does (which is very complicated), and then from carefully trying to optimize it for fun or profit (which is very, very complicated).

Oh, and: using partitions instead of whole drives gives you the chance to use GPT labels to identify your drives. Instead of having a pool made from ada0, ada1 and ada2, it could be /dev/gpt/topbay, /dev/gpt/midbay and /dev/gpt/lowerbay-replacement. This also protects you against fallout from shifting drive numbering (i.e. ada2 becoming ada3 for some reason).
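For example, a minimal sketch (device names and the pool name "tank" are made up):
Code:
# one GPT partition per disk, labelled after its bay
gpart create -s gpt ada0
gpart add -t freebsd-zfs -l topbay ada0   # shows up as /dev/gpt/topbay
# ... same for ada1 (midbay) and ada2 (lowerbay), then:
zpool create tank raidz1 /dev/gpt/topbay /dev/gpt/midbay /dev/gpt/lowerbay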
 
My humble opinion: go with RAID-Z1. Of course it depends on your preferences; you can "buy" more redundancy by sacrificing more space. I just think that for an array with 4 disks, RAID-Z1 is a good and solid compromise. And it's important to always remember: no level of RAID will ever replace decent backups. Not only is there always the risk that your redundancy is just not enough (with disks failing at the most inconvenient time, e.g. during a resilver); human error is also a factor ;) Data destroyed by a stupid command typed as root isn't all that uncommon.
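To make the space trade-off concrete, rough raw numbers for four 4 TB disks (actual usable space is a bit less due to ZFS metadata and padding):
Code:
4 x 4 TB, RAID-Z1:          ~12 TB usable, survives any single disk failure
4 x 4 TB, RAID-Z2:           ~8 TB usable, survives any two disk failures
4 x 4 TB, two mirror vdevs:  ~8 TB usable, survives one failure per mirror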

I personally have a RAID-Z1 pool consisting of four 4TB disks. So far it has worked great, and I did have a failed disk once. When that happened, I noticed that I first had to order a replacement… of course, having a full backup on an external 12TB USB disk, this still felt safe enough; nevertheless, I now have two replacement disks sitting here waiting until they are needed.
 
Zirias, how did you find out about the failure? The last time I had a failure, I just noticed the performance degradation through usage. Now I do scrubs periodically and get emails to root about those and about zpool status reports. Is that the best I can do, or is there a more active monitoring option?

Or, put another way, how do you and others get status updates... zpool status in .bash_profile or suchlike?
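Something like this minimal sketch, maybe (hypothetical; it relies on zpool status -x printing a single known line when nothing is wrong):
Code:
#!/bin/sh
# run from cron; mails root only when a pool is not healthy
status=$(zpool status -x)
[ "$status" = "all pools are healthy" ] || \
    echo "$status" | mail -s "zpool problem on $(hostname)" root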
 
Quote:
Oh, and: using partitions instead of whole drives gives you the chance to use GPT labels to identify your drives. Instead of having a pool made from ada0, ada1 and ada2, it could be /dev/gpt/topbay, /dev/gpt/midbay and /dev/gpt/lowerbay-replacement. This also protects you against fallout from shifting drive numbering (i.e. ada2 becoming ada3 for some reason).

When it comes to naming drives, the physical position is only practical if you *never* touch those drives again. It's usually more practical to use part of the serial number; e.g. I'm using an abbreviation of the vendor name (i.e. WD, HG, SE...) and the last 8 digits of the serial number. Put this on the label on the disk caddy, and you can easily identify a disk physically with 100% certainty. Locating a disk is the job of tools like sesutil(8) (if your hardware supports it), but naming disks that can be moved around or into another system after the bay they were in at one time is usually a recipe for disaster...
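A sketch of how that can look (device name and serial number are made up):
Code:
diskinfo -s /dev/ada0                          # prints the disk ident, usually the serial number
gpart add -t freebsd-zfs -l WD-12345678 ada0   # vendor abbreviation + last 8 digits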

There was a thread about this topic not long ago where I (and others) laid out the pros/cons of several variants: https://forums.freebsd.org/threads/...devs-for-zfs-pools-in-2021.79161/#post-497836
 
Quote:
Zirias, how did you find out about the failure? The last time I had a failure, I just noticed the performance degradation through usage.
Well, yes, I noticed bad performance, had a look in the logs and was greeted by a steady stream of AHCI errors from the kernel, so the problem was pretty obvious by then ;)
Quote:
Now I do scrubs periodically and get emails to root about those and about zpool status reports. Is that the best I can do, or is there a more active monitoring option?
I assume there are a lot of options. A simple first step is to redirect these mails to a mailbox you actually read ;) I didn't do anything more in that direction…
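E.g. one line in the aliases file (the address is made up), then run newaliases(1):
Code:
# /etc/mail/aliases
root: someone@example.com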
 
smartmontools can be included in the daily status.
One can also redirect the "periodic" output to log files instead of email to root; I find that a bit more convenient.
You may need to tweak log rotation if you do this (see the newsyslog sketch after the config block).
Code:
# /etc/periodic.conf
# periodic.conf overrides
# output to file
daily_output="/var/log/daily.log"
daily_status_security_output="/var/log/dailysecurity.log"
daily_status_network_usedns="NO"
daily_status_named_usedns="NO"
daily_clean_tmps_enable="YES"
daily_status_ntpd_enable="NO"
daily_status_zfs_enable="YES"
daily_scrub_zfs_enable="NO"    # set to YES for autoscrubbing at threshold days
daily_scrub_zfs_default_threshold="45"          # days between scrubs
daily_status_smart_enable="YES"
daily_status_smart_devices="/dev/ada0"
daily_queuerun_enable="NO"
weekly_output="/var/log/weekly.log"
weekly_status_security_output="/var/log/weeklysecurity.log"
monthly_output="/var/log/monthly.log"
monthly_status_security_output="/var/log/monthlysecurity.log"
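For the log rotation mentioned above, newsyslog(8) can pick these files up with something like this (a sketch; rotation counts and times are arbitrary):
Code:
# /etc/newsyslog.conf.d/periodic.conf (hypothetical file name)
# logfilename                 mode count size when  flags
/var/log/daily.log            640  7     *    @T01  JC
/var/log/dailysecurity.log    640  7     *    @T01  JC
/var/log/weekly.log           640  4     *    $W6D1 JC
/var/log/weeklysecurity.log   640  4     *    $W6D1 JC
/var/log/monthly.log          640  12    *    $M1D2 JC
/var/log/monthlysecurity.log  640  12    *    $M1D2 JC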
 
Quote:
Zirias, how did you find out about the failure? The last time I had a failure, I just noticed the performance degradation through usage. Now I do scrubs periodically and get emails to root about those and about zpool status reports. Is that the best I can do, or is there a more active monitoring option?

Or, put another way, how do you and others get status updates... zpool status in .bash_profile or suchlike?
Crontabbed daily scrubs, e-mailed by smtp-cli
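A sketch of such crontab entries, shown with base-system mail(1) instead of smtp-cli (pool name "tank" is made up):
Code:
# root's crontab: scrub nightly, mail the pool status afterwards
0 1 * * *    /sbin/zpool scrub tank
0 7 * * *    /sbin/zpool status tank | mail -s "zpool status: tank" root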
 
Quote:
When it comes to naming drives, the physical position is only practical if you *never* touch those drives again. It's usually more practical to use part of the serial number; e.g. I'm using an abbreviation of the vendor name (i.e. WD, HG, SE...) and the last 8 digits of the serial number. Put this on the label on the disk caddy, and you can easily identify a disk physically with 100% certainty. Locating a disk is the job of tools like sesutil(8) (if your hardware supports it), but naming disks that can be moved around or into another system after the bay they were in at one time is usually a recipe for disaster...
Unless you have a bulletproof system for identifying disks (by their serial number, WWN and such), tracking their location (using sesutil etc.), and remembering where a disk was last seen (often, when a disk becomes unresponsive, the locating utilities stop working too), I would follow this advice and put big paper labels on the disks, with the name of each disk written in big letters.

I use a very simple scheme: brand of the disk (HD or Sea), year purchased (14 or 19), and, if necessary, which disk of that year (a single digit is sufficient). For my home system, that is enough. I then partition the disk with gpart, and the GPT labels match the disk name with a short partition name appended (like hd19_home or sea_14_2_backup).
 
I don't know about OP, but I feel like a person with two fishing rods by a pond, exchanging best-practice advice with people managing fleets of high-sea trawlers 😬
 
I agree with mtu, but it does show how flexible FreeBSD can be. One just needs to figure out one's specific requirements and needs. A typical home desktop in a generic mid-sized tower case will have enough internal slots and connectors for, say, 4 to 6 drives. It's easy enough to take a black Sharpie and physically write on the drive labels, or, since the inside of the case is usually not coated, write on that. I've been doing that for years: dates, what I did; heck, you can even draw out the drive-label-to-physical mappings (this lets you label your drives something like Matthew, Mark, Luke and John). It's hard to lose the info when it's written on the inside.
It's fun discussing all of this; it gets one thinking, "If I had an unlimited budget and the spouse would let me, what would I do?" I'm coming to the conclusion that form factors like a NUC with some kind of network-attached storage are a good thing, even for home users.
 