Best disk configuration on my setup

Hello. I've been using FreeBSD on my PC for about a year now and have decided to go fully FreeBSD, removing the other multibooted OSes from my desktop. Currently I have a single 300GB UFS partition on my HDD that I use as root on FreeBSD. I'm planning to install one 120GB SSD for the OS and two 2TB HDDs for data storage (home folder, etc.).

My machine's a bit old, but it has 32GB of RAM and a Haswell Xeon E5, so it's not that slow. I'm thinking of having ZFS on all three disks, putting /home and /tmp on one of the 2TB drives, and mounting the other as /mnt/data, but I'm not sure if this is the best way, especially since I don't quite understand the best strategies for ZFS.

Also, after I install FreeBSD onto another drive, would copying over /usr, /etc, /var, and /home and reinstalling all packages be sufficient?
Thanks!
 
Hello,

In one machine, I have a similar configuration with ZFS: two 2TB HDDs mirrored, plus an extra 256GB cache drive. The system is on the HDDs, with no separate OS drive. Having a big cache speeds up the system significantly, and after some time it mostly eliminated HDD access. Also, the cache in ZFS is safe to use: it can be disconnected and nothing serious happens.
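For reference, creating such a mirrored pool with a cache device might look like this (pool and device names here are hypothetical):

  # two HDDs as a mirror, plus an SSD partition as L2ARC cache
  zpool create tank mirror ada1 ada2
  zpool add tank cache ada3p1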
 
I see. Is having an SSD cache in front of an HDD actually faster than having the OS on the SSD?
I think I won't really be mirroring the drives, though, as I lack the storage. Just rsync backups of the most important data.
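(For the record, such a backup could be as simple as the following, with made-up paths, assuming net/rsync is installed:)

  # copy the important data to the second data disk
  rsync -a --delete /home/ /mnt/data/backup/home/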
 
No. Of course the OS on the SSD is faster. In this case I optimized for cost and put everything on the HDDs. But I have noticed in another configuration with a single SSD that it gets even faster when a second SSD is added as cache. Somebody here may be able to explain this. Because it is very difficult to measure the actual speed, I cannot publish any numbers; this is just a feeling that applications seem to start faster. My hypothesis is that the OS can somehow access the cache faster than the main SSD alone, and that the SSD controller can introduce some parallelism in the data transfer. So, IMHO, it is a good idea to have a cache drive in the system.
 
There is no problem in doing both. An SSD has no seek penalties, so it can be split into multiple partitions.

As there is no mirror, I would use two distinct ZFS pools, one for the OS and one for data, for easier recovery (a pool can get lost).

My desktop uses a 40GB pool for the OS, plus 3GB for /, /usr, and /var, which are on UFS and not encrypted. I do not recommend such a split (because it is difficult to sort out which data should be encrypted), but in any case 50GB for an OS installation should do. That leaves the remainder of the SSD as a potential cache for the data storage.
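A sketch of such a split on the 120GB SSD, assuming it shows up as ada0 (sizes and labels are just examples):

  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k ada0            # boot code
  gpart add -t freebsd-zfs -s 50g -l osdisk ada0    # OS pool
  gpart add -t freebsd-zfs -l l2arc ada0            # remainder: cache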

What might also be an interesting option is the "special" device: a ZFS pool can place the frequently accessed metadata (and optionally also small files) onto a separate device, which is then usually a faster SSD, leaving mostly sequential access to the mechanical disks. The downside of this is: if you lose the special device, you lose the entire pool, and it cannot simply be removed after it has been added. (A cache can always be thrown away.)
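For illustration, adding a special vdev looks like this (partition names are hypothetical; mirror it, since losing it loses the pool):

  zpool add tank special mirror ada3p2 ada4p2
  # optionally keep small files there too, e.g. blocks up to 16K:
  zfs set special_small_blocks=16K tank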
 
Maybe use parts of the SSD as L2ARC for the HDDs.

If not the internal SSD, you can use lowly USB flash drives.
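A minimal sketch of that (pool and partition names made up); since a cache vdev is disposable, it can be removed again if it doesn't help:

  zpool add data cache ada0p4
  zpool remove data ada0p4   # undo later, if desired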
I searched a bit and read that L2ARC is only good if I have more than 64 gigs of RAM and read/write aggressively on disk.. is this really meaningful for lightweight desktop usage? My use of this machine is similar to most ordinary users': web browsing and document editing.
If it's really needed I can add another 128GB SSD..
 
UFS is much simpler to manage. It doesn't have data deduplication or compression and doesn't consume RAM for caching. It's still a good choice when you have hardware RAID or when you are running in a VM. It also has snapshots, and it's easy to back up with dump and restore.
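For example, a full (level 0) backup of a live UFS root and its restore might look like this (the target paths are hypothetical):

  # -L snapshots the live filesystem first; -u records it in /etc/dumpdates
  dump -0Lauf /backup/root.dump /
  # restore into a freshly newfs'ed and mounted filesystem
  cd /mnt/newroot && restore -rf /backup/root.dump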
 
I searched a bit and read that L2ARC is only good if I have more than 64 gigs of RAM
That is nonsense. L2ARC needs 1-2% of its size as additional RAM (more with small block sizes, less with big ones).
So yes, if these people have 50TB of storage and 5TB of L2ARC, they will need some extra RAM.
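On FreeBSD you can check that header overhead yourself (assuming the stock OpenZFS kstat names):

  sysctl kstat.zfs.misc.arcstats.l2_hdr_size
  # e.g. a 128GB L2ARC at 1-2% would cost very roughly 1.3-2.6GB of RAM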

and read/write aggressively on disk..
That's another thing. I tried L2ARC for a desktop once and did not see much benefit. I see massive benefit for database clusters, if the sizing is appropriate. So there may be truth in that, but you need to try it out.

If it's really needed I can add another 128GB SSD..
I would do that for mirroring. Recovering all the forgotten tunings of a base installation from some backup is not much fun, and SSDs don't tell you in advance when they are about to die (I've seen controller bugs brick them).
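For what it's worth, turning a single-disk pool into a mirror later is one command (device names here are hypothetical):

  zpool attach zroot ada0p3 ada1p3   # resilvers onto the new SSD
  zpool status zroot                 # watch the resilver progress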
 
From most docs I kind of see ZFS as a superior(?) filesystem to UFS or others.
Yeah, they're both filesystems, but you could say that because of their origin/history and design intent, they are like night and day.

I take it you're already (reasonably) familiar with UFS, its origins and capabilities; contrast that with, for example, the original Sun article "The Zettabyte File System" (see FreeBSD Development: Books, Papers, Slides) and OpenZFS Basics by Matt Ahrens and George Wilson. Lots of great references to be found here in this forum.

Don't discard either out of hand. Especially for "low resource" (nowadays a very flexible term), relatively unencumbered/traditional use, and embedded use, UFS just works; there, ZFS requires some nifty tuning, at least. For other circumstances you get a whole shebang of options and possibilities with ZFS, with some of the highest levels of data integrity guarantees in the universe ;). Besides that, BEs are a great asset. As a warning: do not underestimate the consequences of working with ZFS without redundancy; there is no such thing as a ZFS equivalent of fsck(8).
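As a taste of BEs, assuming a ZFS-on-root install, bectl(8) makes them almost trivial:

  bectl create pre-upgrade    # checkpoint the current system
  bectl list                  # show all boot environments
  bectl activate pre-upgrade  # boot back into it if an upgrade goes bad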

I think users can easily be overwhelmed by all those ZFS options, tuning parameters, and the overall complexity of the functionality that ZFS offers. ZFS combines all of that with great care, but there is just no general simplification for complexity.

One of the greatest challenges, I think, is for (beginning) ZFS users to get to grips with and feel comfortable with a compact workable subset of all that ZFS has to offer.
 
 
I see. Is having an SSD cache in front of an HDD actually faster than having the OS on the SSD?
I think I won't really be mirroring the drives, though, as I lack the storage. Just rsync backups of the most important data.
Having the filesystem on the SSD is faster than using the SSD as a cache for a slower drive. If you have the space, you can do both, as mentioned above, with partitions. If you explore different ways to offload work to the SSD, be aware that cheap consumer SSDs aren't designed for heavy write activity and will degrade rapidly under some workloads.

Whenever you go out to a slower drive without a faster cache (RAM, SSD, etc.), you get to enjoy the speed of that slower drive for that task. An SSD as cache tries to fill the gap between fast RAM (ARC) and slow disks; some RAM is used to keep track of the SSD's contents, too. L2ARC is faster than a slow magnetic disk but not faster than RAM. It caches data to speed up reads; writes are only sped up by it if it offloads some read I/O during a moment of mixed read+write traffic. It may help if your ARC is too small, but it doesn't replace what more RAM or faster disks can do.
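If you do try a cache device, you can watch how much read traffic it actually absorbs (pool name hypothetical):

  zpool iostat -v tank 5   # per-vdev I/O, refreshed every 5 seconds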

If your data isn't important and the effort to reinstall and configure the OS doesn't matter, then this is fine; otherwise I suggest a backup disk before an SSD (cache or not). Besides, with a backup drive you have a place you can zfs send/recv to as you try different layouts to learn what works well for you.

If you keep the data you want copied separate from the data you don't, on different datasets, you can use zfs send/recv to transfer only the changes, and only for the desired datasets.
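A sketch with made-up dataset names; after the first full send, only the deltas travel:

  zfs snapshot tank/home@mon
  zfs send tank/home@mon | zfs recv backup/home
  # later, send only what changed since @mon:
  zfs snapshot tank/home@tue
  zfs send -i @mon tank/home@tue | zfs recv backup/home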

ZFS compression can increase data throughput by making more fit through the common bottleneck of the drive's throughput. Similarly, compression can make the ZFS ARC work like a bigger (but, depending on decompression overhead, slower) cache.
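Enabling it is a one-liner, and you can check later how much it actually buys you (dataset name made up):

  zfs set compression=lz4 tank/home
  zfs get compressratio tank/home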

As more RAM gives the biggest performance impact most of the time, and an SSD used as the actual drive beats an SSD used as cache, is there a reason you would throw multiple small SSDs plus multiple large magnetic disks at this instead of one or more big SSDs? Is this based on drives already on hand, availability and pricing, or open bays in the case?

If you use ZFS, it pairs more nicely with SSDs, as performance wasn't its top design concern and seek times are one of the ways you feel that. If you don't need or benefit much from compression and increased I/O throughput, UFS is still a good choice for performance, and though it doesn't have checksums top to bottom, it's quite reliable too. Snapshots, boot environments, etc. work so well and so easily that it's hard to give up ZFS and all its goodies for any other filesystem, but others are still viable and sometimes even a better fit.
 