Advice on how to structure storage for dual-boot FreeBSD and Linux with a shared ZFS home

With the release of 14.0, I plan to switch to FreeBSD as my main OS.
Unfortunately, some of the programs I use work only on Linux, so I will still have to boot to Linux from time to time.

I am wondering how to structure my storage.
In my notebook, I can place 4 disks in total: 3 SSDs (2 NVMe M.2 + 1 SATA) and 1 SD card.
I want to use ZFS and have my home directory on one of the SSD disks.
However, I am still not sure whether it would be better to use NVMe or SATA.
I will have to mount the file system from my home disk under both FreeBSD and Linux.

As the SD card has much lower throughput than an SSD, the only thing I could potentially accept on the SD card is Linux.
On the other hand, FreeBSD + Linux + home directory require only 3 disks, so I could do without the SD card at all.

ZFS is a robust file system, and I am considering mirroring my home disk.
On the other hand, I can live with simply backing up my home disk manually once a month to an external drive using snapshots.

I also know some people use a separate disk for /usr or /var (for convenience/performance? I am not sure).

There are multiple possible configurations, and I struggle to decide on one, probably because of lack of experience.
I was wondering how you would structure the storage in my case or what other aspects you would consider.
 
If you want to share a zpool between Linux and FreeBSD booted at different times, one or the other will not boot because the pool will belong to the other (offline) system. When booting FreeBSD you will need to boot single user and zpool import -f the pool. On the Linux system you will need to do the same (if it will let you).

Alternatively you could copy your /etc/zfs/zpool.cache from the FreeBSD partition to the Linux partition. zpool.cache contains the UUID of the zpool it expects to import.

I do this on my sandbox machine which has multiple FreeBSD partitions, one for each version {15, 14, 13, 12} {amd,i386} with no problems whatsoever. Though on occasion a new UUID is written to the zpool when I need to recreate it (it's my sandbox machine). Then I simply copy the file to each of the other partitions, all of which are UFS. I run a mixed UFS and ZFS environment here.
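
If you go the zpool.cache route, the copy itself is a one-liner. A minimal sketch, assuming the other installation's root is mounted at /mnt/other (that mountpoint is a placeholder; adjust it to wherever you mount the other system's root):

```shell
# Copy the cache file so the other installation expects the same pool.
# /mnt/other is a placeholder for wherever the other root is mounted.
cp /etc/zfs/zpool.cache /mnt/other/etc/zfs/zpool.cache
```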

Stay away from SD cards. Their life expectancy is poor, and you will lose your data at some point. Simply put your shared zpool in its own partition. Or better yet, use ext4: I tried ZFS on Fedora, and at some point a kernel update cost me access to my zpool. The data wasn't important as it was a play area. I'd suggest using a filesystem that is fully supported by Linux. For example:

My laptop has a Windows 10 partition and two FreeBSD partitions with my main FreeBSD system, a zpool, and two FreeBSD sandboxes (yes 3 FreeBSD systems on two partitions). The last partition contains a FAT filesystem for transfer of data from FreeBSD to Windows and back. Not that I use Windows a lot, I don't. But on the occasion I do need to transfer data I have a 7.5 GB FAT filesystem for that purpose. Anything larger would need to be fetched from one of my servers downstairs. You could use the same approach between FreeBSD and your Linux distro of choice.
 
In my notebook, I can place 4 disks in total, 3 SSDs (2 NVMe M.2 + 1 SATA) and 1 SD Card.
Keep in mind all equipment draws battery power and adds weight. My laptops go with one SSD even though there are slots to fill.
[…] have my home directory on one of the SSD disks. […] I will have to mount file system from my home disk under FreeBSD and Linux. […]
Well, I guess your Linux-only programs do not need access to all your home data, so, inconvenient as it may be, I'd manage data transfers manually. It really depends on how often "time to time" is.
[…] I am considering mirroring my home disk. […]
RAID is great if something needs to keep running in a degraded state, e.g. a network server, where (ideally) you can hot-swap drives. On a laptop, however, you probably just stop working if a disk fails. Given regular backups, immediate downtime is not a catastrophe.
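
For the monthly backup to an external drive mentioned in the question, a minimal sketch with zfs snapshot and zfs send (the pool/dataset names zhome/home and backup are placeholders for your actual layout):

```shell
# Take a dated snapshot of the home dataset.
zfs snapshot zhome/home@2024-01
# Full send to a pool on the external drive the first time...
zfs send zhome/home@2024-01 | zfs receive backup/home
# ...and incremental sends against the previous snapshot afterwards.
zfs send -i zhome/home@2024-01 zhome/home@2024-02 | zfs receive backup/home
```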
[…] I also know some people use separate disk for /usr or /var (because of convenience/performance? I am not sure). […]
You can deduplicate and share data that is the same for all computers over the network (/usr, shared UNIX resources) and put frequently accessed data on fast storage (/var, as in variable).
[…] Stay away from SD cards. Their life expectancy is poor. […]
I second that. Great for occasional access (e.g. a digital camera), unsuitable for constant read/write operations (OS and/or data).
 
Keep in mind all equipment draws battery power and adds weight.
These are things I actually do not care about. My notebook is always plugged to the power source, and the weight of the disks is rather small compared to other stuff I need to carry in my backpack.

Do you think it is better to have system on NVMe and data on SATA, or the other way?
During daily work I mostly read/write user data. On the other hand, the programs I need to launch are huge.
 
Dear mkru,
I have a laptop with a single NVMe and dual boot FreeBSD and Debian. I use a ZFS pool on one partition to share HOME.

CORRECTION: I share only a directory mounted on HOME. Please see post #13

I set up the pool running FreeBSD-13.2-RELEASE. During the installation of ZFS on Debian I was asked if I wanted to upgrade the pool; I chose not to upgrade. Up to now I have done ZFS management tasks, such as setting bookmarks, on FreeBSD only.
Do you think it is better to have system on NVMe and data on SATA, or the other way?
I think it is a good idea if the data disk can be attached easily to a different computer. The NVMe disk I have has a very tiny connector, while SATA connectors are a more robust type. Therefore I would prefer SATA for the data.
 
It will work fine if you put a `zpool import -f` into /etc/rc.local.

Note that by default ZFS on Linux creates zpools with features that can't be read by FreeBSD. Create the pool like this on Linux (the pool name and device are examples; substitute your own):
Code:
zpool create \
    -o feature@userobj_accounting=disabled \
    -o feature@edonr=disabled \
    -o feature@project_quota=disabled \
    -o feature@allocation_classes=disabled \
    -o feature@resilver_defer=disabled \
    -o feature@zilsaxattr=disabled \
    zhome /dev/sdb1

Cross-check that your pool can actually be imported by both OSes before spending time populating it.
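
One way to do that cross-check, assuming the pool is called zhome (a placeholder): export it cleanly on Linux, then from FreeBSD try a read-only import first.

```shell
# On Linux: cleanly export the pool.
zpool export zhome
# On FreeBSD, after rebooting into it: a read-only import is a safe
# first test; if the feature set were incompatible, it would fail here.
zpool import -o readonly=on zhome
zpool export zhome
# Only then import read-write and start populating the pool.
zpool import zhome
```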
 
On FreeBSD, Linux or both?

I didn't know about this. Do you know if there is any source of information on differences between ZFS on FreeBSD and Linux?

Both. You could also add a `zpool export ...` to the shutdown rc, but that doesn't work if you ever have a crash.

I don't think so. I use the enabled features as a "reference", but I have never felt the need for any of the features that are not present on both Linux and FreeBSD.
 
I used zpool export and zpool import in a script started by /usr/local/etc/rc.d/zfstank. On Debian I wrote a similar systemd unit. But that is not the complete story.

You have to delay cron jobs that access your home directory, even if only by mailing results via the MAILTO statement in the crontab header. Tools such as fetchmail should be started only after the pool has been imported. The same applies in the other direction when the pool has to be exported: you might have to kill gpg-agent before you can export the pool. There may be more to consider depending on which software you run.

I took that path without forced import of the pool because I like to avoid forced operations.
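
A minimal sketch of such an rc.d script, assuming a pool named tank (the script name follows the /usr/local/etc/rc.d/zfstank example above; the pool name is a placeholder):

```shell
#!/bin/sh
# PROVIDE: zfstank
# REQUIRE: zfs
# KEYWORD: shutdown

. /etc/rc.subr

name="zfstank"
rcvar="zfstank_enable"
start_cmd="zfstank_start"
stop_cmd="zfstank_stop"

# Import the shared pool at boot. No -f here: the import fails loudly
# if the pool was not exported cleanly by the other OS.
zfstank_start()
{
    zpool import tank
}

# Export the pool at shutdown so the other OS can import it cleanly.
zfstank_stop()
{
    zpool export tank
}

load_rc_config $name
run_rc_command "$1"
```

Enable it with zfstank_enable="YES" in /etc/rc.conf; anything that touches the shared data (cron jobs, fetchmail, and so on) then has to be ordered after this script.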
 
If you want to share a zpool between Linux and FreeBSD booted at different times, one or the other will not boot because the pool will belong to the other (offline) system. When booting FreeBSD you will need to boot single user and zpool import -f the pool. On the Linux system you will need to do the same (if it will let you).
Thanks for sharing this crucial tip. I am planning to share a zpool between Linux and FreeBSD in the near future. Do you happen to know why zpool import -f is necessary? What does it do exactly?

If I guess correctly: ZFS records some metadata about the zpool somewhere, and when we import it in one operating system, the metadata changes. When another operating system then tries to import it, it notices the changed metadata and refuses the import?
 
zpool import -f is needed because the pool is "owned" by another machine's (or partition's) import.

Yes, as I said in my reply above in this thread, the metadata is stored in /etc/zfs/zpool.cache. You could copy that file to your Linux system. I copy that file to the various FreeBSD sandbox partitions on my sandbox machine and laptop (which also has a couple of extra FreeBSD sandbox partitions). Otherwise you will need to boot single user, mount all your non-ZFS filesystems, zpool import -f, and either continue the boot or reboot.
 
I have to correct myself. I have separate home directories on FreeBSD and on Debian, and I share a directory ~/.tank. In the home directories, entries such as ~/Maildir are symbolic links to ~/.tank/Maildir. In case the zpool import fails, I can still log in.
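
The layout described above can be set up roughly like this (the directory names follow the post; using ~/.tank as the mountpoint of the shared dataset is the assumption):

```shell
# ~/.tank is where the shared dataset is mounted; each per-OS home
# directory gets symlinks into it for the data that should be shared.
mkdir -p ~/.tank/Maildir
ln -s ~/.tank/Maildir ~/Maildir
# If the pool fails to import, ~/Maildir is just a dangling symlink,
# but logging in still works.
```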
 
Did you try the Linux compatibility layer to run those programs?
I have tried. I have also tried running them in Linux jails. These are huge EDA tools; it is not so easy to run them on an unsupported OS.
 
I tried zfs on Fedora. At some point there was an update to the kernel and I lost access to my zpool. The data wasn't important as it was a play area. I'd suggest using filesystem that is fully supported by Linux.
What do you mean by loss of access to zpool?
  1. Were you simply unable to boot from this zpool (because the ZFS modules for Linux were updated incorrectly)? If you booted from a live CD, would you be able to import this pool?
  2. Was the data on this pool lost forever due to a simple kernel update?
Am I correct in understanding that you think ZFS on Linux is so unstable and unnatural that ext4 would be even better (quite acceptable)?
 
It seems you need to make some conceptual technology choices concerning the quality of everything you use. Let's see what we have as input:
  • Only one computer. It's nice to have storage redundancy, but these aren't industrial-grade devices, and because it's a laptop the NVMe drives can't be properly cooled, which can lead to their breakdown.
  • Storage devices are heterogeneous and difficult to use in a single ZFS pool.
  • Most likely there is no ECC memory (although there are laptops with ECC). By the way, you didn't say how much memory is available.
  • You are considering the SD card as storage.
  • In your question you indicated at least some reasons for using Linux ("some of the programs I use work only on Linux"), but did not indicate reasons for using FreeBSD.
FreeBSD requires a serious look at things. Why should FreeBSD share a computer with Linux at all? This is why I personally uncompromisingly use only FreeBSD on my computers and believe in this system. So... throw away the SATA SSD and SD card; on the 2 NVMe drives you could make a ZFS mirror and install your Linux distribution there. This way, hardware/software reliability will at least be more or less consistent.
 
By the way, you didn't say what memory size is available.
64 GB
In your question you indicated at least some reasons for using Linux ("some of the programs I use work only on Linux"), but did not indicate reasons for using FreeBSD.
I am simply fed up with the Linux mess; I have tried FreeBSD and find it more coherent and intuitive.
 
When I started with Linux I dual-booted Fedora and Windows. My documents were on a separate partition mounted to /home. Formatted as FAT32, it was readable from both operating systems. Worked well. I have no idea how to do the same with ZFS, but it should be similar.
 
Shared /home on the EFI partition would achieve that with nothing shocking.

I use a FAT32 thumb drive a lot for transferring. Universally accepted.

Don't you lose some UNIX file permissions/ownership data, though? They are a filesystem attribute, right?
 
Yeah, FAT32 has no Unix permission and ownership layers, not to speak of extended attributes.

You also can't save large files to such a drive; FAT32 caps file size at 4 GB.

The /home on ZFS plan works if you follow some remarks from this thread.
 
What do you mean by loss of access to zpool?
  1. Were you simply unable to boot from this zpool (because the ZFS modules for Linux were updated incorrectly)? If you booted from a live CD, would you be able to import this pool?
  2. Was the data on this pool lost forever due to a simple kernel update?
Am I correct in understanding that you think ZFS on Linux is so unstable and unnatural that ext4 would be even better (quite acceptable)?
It could boot (from ext4), but an incompatible KBI change left the ZFS driver unable to load. The vendor did not support ZFS; it was fetched from zfs-on-linux.

The pool was lost forever. I didn't try too hard to recover it. It was a throwaway play VM at $JOB.
 