SSDs, fsck and FreeBSD 11.*

I had a brief look around the net for information about SSDs (solid-state drives) and FreeBSD and found only confusing articles on the subject. That is probably not the fault of the authors; I am simply not well versed in hardware.

Anyway, I decided to put my queries here, as I have a project at hand and have always liked the friendly nature of the FreeBSD community. I am upgrading my server to include an SSD, and I have never used one of these drives with FreeBSD.

The SSD in question is 120GB, and I want to know a couple of things before I proceed. Is fsck still good to go with this hardware, and if so, how should it be used?

Are SSDs easy to do a fresh install on? I would like to go with the latest release version of FreeBSD 11.*.

Because SSDs don't give you much storage for the buck, I want to attach another drive as well, this time a 1TB hard disk. How do I go about moving the mount point for /usr/home from the main drive (the SSD) to the hard disk? I would normally partition the whole hard disk for Samba shares and the like, but I worry about disk space for the home directories, and particularly for /var/log (where my favourite file, messages, lives; I love to watch people/bots try to hack the box, and whois is such a cool command).

Is the above configuration possible with a hybrid disk setup? Is fsck still going to work? Does the installer itself have a disk tool for formatting and changing mount points in FreeBSD 11.*?

Thanking you in advance,
Jonathan.
 
Jonathan,

I'm not too familiar with fsck. However, I do have an SSD in my laptop, and I can say it works terrifically with both FreeBSD 11-STABLE and FreeBSD 12-CURRENT. I'm using the following file system:

ZFS (which has lots of amazing features and really seems to be leading the pack).

Extremely fast and efficient!

Hope that helps. I'm not sure whether fsck is a holdover from previous versions of FreeBSD; I just haven't seen anything about it, since I only recently came to Unix from Linux.

Best Regards,

Brandon
 
ZFS does not have/need fsck; it verifies checksums at every read and can be scheduled to scrub the filesystems periodically.
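
For example (a minimal sketch, assuming the default pool name zroot that the installer creates; the stock periodic(8) machinery can also run scrubs for you):

    zpool scrub zroot                          # start a scrub by hand
    zpool status zroot                         # watch its progress
    # or let the daily periodic scripts handle it; in /etc/periodic.conf:
    daily_scrub_zfs_enable="YES"
    daily_scrub_zfs_default_threshold="35"     # days between scrubs (the default)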

SSDs are usually completely painless to use and install onto, regardless of whether they are attached via SATA, SAS, NVMe or PCIe. Apart from the different device names, they can all be used for the root pool, and FreeBSD can boot from every one of these types.
The only limitation is buggy or crappy BIOS/UEFI firmware, especially on cheap consumer boards and notebooks, which sometimes can't boot from PCIe or NVMe, or needs ugly workarounds such as legacy boot.

Regarding the second disk: if you don't actually need the space right now or in the near future, I'd go with a second SSD for mirroring first, so ZFS can use its full potential regarding file integrity. Once you actually need the extra space, you can either add another set of SSDs (which will likely be a lot cheaper by then) to the pool, or replace the existing ones with bigger SSDs or spinning rust.
Using a second pool (on big, slow hard drives) is also possible. Whether you add that pool at installation time or later doesn't matter, as you can easily transfer the /usr/home dataset to the new pool and set the mountpoints accordingly, roughly as sketched below.
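
A rough sketch of such a move, assuming the installer's default zroot/usr/home dataset and a new pool named tank on the big drive (shown here as ada1; your device names will differ):

    zpool create tank ada1                       # new pool on the 1TB drive
    zfs snapshot zroot/usr/home@migrate          # freeze the current home data
    zfs send zroot/usr/home@migrate | zfs recv tank/home
    zfs set mountpoint=none zroot/usr/home       # retire the old dataset
    zfs set mountpoint=/usr/home tank/home       # mount the copy in its place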

ZFS is very flexible when it comes to expanding a pool or even adding new pools. I suggest you read the chapter on ZFS in the FreeBSD Handbook [1] to get a good overview of how ZFS can really simplify the way you handle storage.

[1] https://www.freebsd.org/doc/handbook/zfs.html
 
A: SSDs work just fine. They are, after all, nothing but normal block devices, and usually have completely normal hardware interfaces (SATA, some SAS) that are tried and tested. When one gets into the more exotic interfaces (NVMe, MMC), it can get hairy.
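
If you want to see that for yourself, FreeBSD presents them all as ordinary disk devices; two base-system commands list what the kernel found:

    camcontrol devlist     # SATA/SAS devices attached via CAM (ada0, da0, ...)
    nvmecontrol devlist    # NVMe controllers and namespaces (nvme0, ...)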

B: You need fsck as much or as little as you need it on spinning rust disks: ZFS doesn't need it, and FFS with soft updates rarely does. The good news is that it runs faster on SSDs.
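
To illustrate, fsck is invoked exactly as it would be on a spinning disk; the device name below is only an example:

    fsck -t ufs /dev/ada0p2        # check an unmounted FFS filesystem
    tunefs -p /dev/ada0p2          # show whether soft updates are enabled
    tunefs -n enable /dev/ada0p2   # turn soft updates on (filesystem unmounted)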

C: Normal file systems (I've tried FFS and ZFS on FreeBSD) work just fine. TRIM is supported in both FFS and ZFS. Just don't expect TRIM to work wonders and make your SSDs run dramatically faster or last forever.
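
Turning it on is straightforward; a sketch with an illustrative device name (on FreeBSD 11, ZFS issues TRIM automatically, so there is only a sysctl to check):

    tunefs -t enable /dev/ada0p2    # FFS: enable TRIM (filesystem must be unmounted)
    tunefs -p /dev/ada0p2           # verify the flag took effect
    sysctl vfs.zfs.trim.enabled     # ZFS: 1 = TRIM enabled (the default on 11.x)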

D: All SSDs have endurance problems; that lies in the nature of using flash for storage, and file systems make it worse through write amplification. One would think that SSDs fail gracefully: first becoming slower (as the spare blocks within the drive run out), then reporting clean write errors once no writeable blocks are left, all with SMART reporting of their status along the way. The reality is different: when SSDs wear out, they too often just brick themselves or act bizarrely. On the other hand, an amateur home workload is unlikely to wear out an SSD. Still, even with SSDs, one needs a good backup and redundancy plan; using ZFS mirroring is a great first step in that (but not a complete solution). And note that even spinning rust disks today have endurance problems: modern disk drives are typically spec'ed at 550TB of writes per year, which for a large 10TB drive means that you can only overwrite it about once a week. Again, amateurs at home are unlikely to run into these limits.
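
If you want to keep an eye on wear, smartmontools (sysutils/smartmontools from ports) reads the drive's own counters; attribute names vary by vendor, so treat this as a sketch:

    smartctl -a /dev/ada0 | egrep -i 'wear|lifetime|written'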

E: Adding a separate disk for /usr/home or /home is an excellent idea. If all you need is 1TB, there are many inexpensive options; you can even get SSDs that size and larger (though the large SSDs do get a little pricey). Segregating those directory trees into a separate file system (perhaps on separate devices) is easy, and it has no bearing on whether or which fsck to use. Personally, I would do it by issuing the appropriate newfs or zpool create commands from the command line and then hand-editing /etc/fstab, something like the sketch below; to each his own.
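
A minimal sketch of the FFS variant, assuming the 1TB disk shows up as ada1 and using a GPT label of my own choosing:

    gpart create -s gpt ada1
    gpart add -t freebsd-ufs -l home ada1
    newfs -U /dev/gpt/home                   # FFS with soft updates
    # then one new line in /etc/fstab mounts it at boot:
    # /dev/gpt/home   /usr/home   ufs   rw   2   2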

Here is a proposal that is a little more expensive but gives you excellent performance. You say you need roughly 100GB for the OS install itself (which sounds reasonable within a factor of 3), and you want 1TB for user data (in /usr/home or similar). Let's assume that the loss of your user data is easier to tolerate, because you'll have an excellent backup solution (like a physically removable drive with a separate file system). In that case, it would be a good idea to set up the root file system as mirrored (so the loss of a drive doesn't make the system unusable) but leave the user file system non-redundant (since, in case of a drive failure, you have the backup). So buy two 600GB SSDs, partition each into a 100GB part and a 500GB part, use the two 100GB partitions to build a two-way mirrored OS file system (using ZFS), and use the two 500GB partitions for a 1TB non-redundant file system (again ZFS).
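
In ZFS terms, and leaving aside the boot partitions and bootcode the installer would normally set up, the pool layout could look roughly like this (partition names are illustrative):

    zpool create zroot mirror ada0p2 ada1p2   # 2 x 100GB -> mirrored OS pool
    zpool create tank ada0p3 ada1p3           # 2 x 500GB striped -> ~1TB, non-redundant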

Another proposal would be to do the opposite. If your requirements for availability of the server are not very high (it's OK if it is down for two days), but your requirements for reliability of the user data are very high (loss of data would be catastrophic, and even going back to the daily backup would be very annoying), then buy the smallest SSD that fits your OS install and make it non-redundant. After all, if your OS disk dies, you can reinstall within a day of buying a new boot disk. Then invest all the money you saved by buying only a small SSD into two 1TB spinning disks, and run all your user data mirrored in ZFS, for example:
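
    zpool create tank mirror ada1 ada2           # two 1TB disks, two-way mirror
    zfs create -o mountpoint=/usr/home tank/home

(Device names here are illustrative; substitute whatever your disks show up as.)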

Observe that I used the words "availability" and "reliability" above; they mean different things. Before you decide what to buy, you need to think through what you really want (in other words, do some requirements analysis).
 