Filesystem selection on *BSD and GNU/Linux

Hi all!

I'd like to ask the community which FS I should select to maximize compatibility between the *BSD systems and GNU/Linux. At a minimum I'd like to be able to use a FS seamlessly on both FreeBSD and GNU/Linux, with the only requirement that I do not rely on FUSE. The targets are a bunch of external USB hard drives that I own.

I've been extremely happy with UFS (or UFS2) on my machine (honestly, I'd have to recheck which one I'm currently using), but apparently the UFS/UFS2 implementations differ between OSes and are incompatible. I've read somewhere that, for example, running fsck from OpenBSD on a FreeBSD UFS can destroy all the data (or the other way round, I can't recall precisely now), because they are basically different implementations.

I've also read that a good choice could be ZFS, but then I found that you need to be careful about "zpool versions" (aside: I'm completely unfamiliar with ZFS and have only read basic information about it). According to Wikipedia one of the most popular zpool versions is 5000, which seems to be the same for FreeBSD and GNU/Linux (is this correct?), so in principle I could select ZFS. But I don't really know whether that is going to remain the case in the future: what guarantees do I have that zpool version 5000 will stay available on both systems? What if at some point FreeBSD and GNU/Linux diverge in zpool versions?

Maybe I'm just worrying too much about a minor issue, but I've found that selecting a FS for the *NIX world (leaving out MS Windows is absolutely fine) is, at least for me, not a trivial task. The most important things for me are: compatibility (a FS that works on both the *BSDs and GNU/Linux would be excellent; if not, FreeBSD plus GNU/Linux is just fine for the time being. OpenBSD does not seem to support ZFS natively, possibly due to licensing issues?) and, secondly, not relying on FUSE.

Any suggestions/insights/tips are much appreciated! Thanks a lot for taking the time to read my post!

Regards,
Lucas.
 
Ext2/3/4 are supported by ext2fs.ko in FreeBSD, with r/w support and decent performance.
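For reference, loading and mounting looks roughly like this (assuming the ext4 partition shows up as /dev/da0s1; adjust for your device):
Code:
# kldload ext2fs                   # one driver handles ext2, ext3 and ext4
# mount -t ext2fs /dev/da0s1 /mnt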
 
Menelkir thanks for that. Since you are suggesting any of those three, which one would you choose? Also, doing a `man ext2fs` I see that journalling is not supported at all. Can this be an issue? For example, say I select ext4: can GNU/Linux activate the journalling automatically and change the FS structure, so that I then face issues in FreeBSD or any other system? I'm aiming for something "plug and play" (just plugging the drive in without caring about `mount` parameters when I reconnect it on another OS).

I've also read in many places that the ext* family of filesystems is considered a joke compared to either UFS or ZFS (what "a joke FS" exactly means is beyond my knowledge, so I can't really judge; I never had any problems back when I was a GNU/Linux user). On the other hand I've also read that UFS is rock solid compared to the ext* family, which I take to mean more robust and reliable than ext*?
 
Menelkir thanks for that. Since you are suggesting any of those three, which one would you choose? [...]
I would suggest ext4 because it has more features and optimizations than the previous versions. About the journal: yeah, it could be a problem, but since you need it for something you mount, copy stuff to, and dismount, it doesn't seem to be much of an issue.
UFS can be mounted R/O on Linux, but that probably isn't what you want.
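A minimal sketch (the Linux driver takes the on-disk flavour via the ufstype option, and stock kernels typically build UFS without write support, hence ro; the device name is just an example):
Code:
# mount -t ufs -o ro,ufstype=ufs2 /dev/sdb1 /mnt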
 
In general, I would stay away from any "non-native" file system implementations. What I mean by that is: ext2/3/4, for example, was developed on Linux, and the FreeBSD implementation does not share the same source code. The same happens with NTFS (the real implementation is in Windows; both the Linux and *BSD versions were done after the fact), and UFS (where the Linux version is a clone with limited functionality). The problem is that the semantics (the meaning) of on-disk metadata in a file system can be very complex, and the order of operations (what to update first) is very important in case of a crash (because fsck or remount has to pick up from a partial operation). These details are hard to get right in a re-implementation.

I have two suggestions. The first one is to use ZFS as the shared file system between Linux and FreeBSD. That's because today (starting with version 13), FreeBSD and Linux use the same ZFS source code, so compatibility is pretty much guaranteed. And zpool versions and feature flags are backwards compatible: if you use a pool version that both can use today, and restrict your use of feature flags to the common subset, then future versions are highly likely to still be able to use the pool you create today.
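If both sides run OpenZFS 2.1 or later, the compatibility pool property can enforce that common subset for you; a sketch, with pool name and device invented for illustration (the valid feature-set names are the files under /usr/share/zfs/compatibility.d):
Code:
# zpool create -o compatibility=openzfs-2.0-linux tank /dev/da0p1
# zpool get compatibility tank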

The second suggestion is to only access the file system from a single OS. One way to do that is to either have a dedicated computer that you use as a file server, or perhaps use a small VM which can run from either OS. That file server can then be accessed using something like NFS or Samba.
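For the NFS route on a FreeBSD file server, the export itself can be a single line; a sketch with placeholder path and network:
Code:
# /etc/exports
/tank/share -network 192.168.1.0/24
Then enable it with nfs_server_enable="YES" in rc.conf and service nfsd start.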
 
ralphbsz thanks for stepping into the discussion. I was actually thinking about this setup (the NFS part, independently of the underlying FS, which is useful to me as I'm interested in plugging in these drives and making them available to the network). Can you give me more information about why you said "The second suggestion is to only access the file system from a single OS"? Did you mention this just because it is handy to access the FS through NFS, or because there could actually be complications when physically moving the drive to other systems?

Regards,
Lucas.
 
What I really tried to say: If you use a file system that is native to one OS (like ext2/3/4 is native to Linux, UFS/UFS2 to BSD, NTFS to Windows, ...), please use it only on that native system. Do not use after-the-fact implementations on non-native systems; while the folks who write those are honorable and try hard, it is just too difficult.

Which then leaves you with the nasty question: how do you access that disk? Well, run the native OS (for example Linux for an ext2/3/4 file system) at all times, and use some other way to access the data. With VMs being easy and common, and NFS and Samba being well tested and debugged, that's extra work and somewhat tedious, but not impossible.
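On the Samba side, a minimal share definition is similarly small; a sketch for the FreeBSD Samba ports (share name, path, and user are placeholders; on Linux the file is typically /etc/samba/smb.conf):
Code:
# /usr/local/etc/smb4.conf
[shared]
    path = /tank/share
    read only = no
    valid users = lucas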
 
You have not provided any information about the applications using your disks or the physical location(s) of your systems... However...

Moving physical disks between different operating systems is a recipe for future (ongoing) headaches. I would not choose to do it routinely unless I felt that there was no other option. Most file system formats are native to just one operating system, and because code changes to a file system on its native operating system are unlikely to be tested on other (non-native) operating systems, the risk of damage at some time in the future is just too high. If I still wanted to do it, I would use ZFS, exercising caution around software version settings (I genuinely think that this would work well, because of the shared ZFS code base, provided appropriate care is taken).

To share beyond FreeBSD and Linux (note that only Ubuntu ships ZFS out of the box; other Linux distributions require you to install the OpenZFS modules separately), consider building a central ZFS storage server.

Your ZFS server can provision storage on the LAN via NFS exports, and iSCSI using ZFS ZVOLs.
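Both are essentially one-liners on the ZFS side; for illustration (pool, dataset, and size invented):
Code:
# zfs set sharenfs="-network 192.168.1.0/24" tank/shared   # NFS export via the sharenfs property
# zfs create -V 200G tank/vol0                             # ZVOL, appears as /dev/zvol/tank/vol0
The ZVOL can then be exported as an iSCSI target (with ctld on FreeBSD).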

There's a performance penalty for using Ethernet protocols for "disk" storage, and it's massive if you need synchronous writes (a suitably configured ZIL ameliorates that, as sketched below). But the big payback derives from centralised management, shared capacity, expansion, and backup.
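Concretely, that mitigation is a separate log (SLOG) device, typically a small fast SSD; a sketch with an invented pool and partition name:
Code:
# zpool add tank log ada1p1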

Ethernet-resident storage servers aren't suitable for all storage requirements, but a ZFS server allows you to put a big chunk of your (storage) eggs in one basket, with the opportunity to watch and curate that basket carefully.
 
[...]
I have two suggestions. The first one is to use ZFS as the shared file system between Linux and FreeBSD. That's because today (starting with version 13), FreeBSD and Linux use the same ZFS source code, so compatibility is pretty much guaranteed. And zpool versions and feature flags are backwards compatible: if you use a pool version that both can use today, and restrict your use of feature flags to the common subset, then future versions are highly likely to still be able to use the pool you create today.
Just to emphasize: verify that the (different) OSes support the same ZFS feature flags; preferably, the same OpenZFS version. This applies especially when upgrading an OS: do not use zpool upgrade -a unless you are absolutely sure that both OSes support any newly introduced ZFS feature flags, and that you do not want to be able to return to a previous BE (Boot Environment) that lacks these feature flags. After this zpool command you cannot return to a previous incarnation of the feature flags.
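A quick way to verify is to compare, on each OS, what the ZFS in use supports against what the pool has active; for example (pool name invented):
Code:
% zpool upgrade -v                        # feature flags this OS's OpenZFS supports
% zpool get all mypool | grep feature@    # feature flags on the pool itself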
 
All right! Thanks a lot for the suggestions so far! Yesterday I took my first steps towards using ZFS (the ZFS-related man pages are... intimidating?). First thing learnt: feature flags, when running `zpool create`. To my dismay, zpool on Ubuntu 20.04 (the machine I do regular work on every day) created the pool with two flags that I never explicitly asked for: "feature@project_quota" and "feature@userobj_accounting". When I re-plugged the drive into my FreeBSD machine I was immediately warned that I would only be able to import it read-only. VERY disappointed with that behaviour from Ubuntu. I haven't tried yet, but my bet is that FreeBSD won't inject any feature flags "presents" for free when creating a pool :). Then I destroyed the pool and re-created it, EXPLICITLY disabling those flags (a sketch of the command is at the end of this post). So yes, Erichans is 100% correct about that one!

I think that for my particular use case, rule number 1 is going to be: create pools without extra features and never upgrade the pools on these external drives. I think in the end I will also add the NFS layer, as this will provide the "glue" for any other systems that do not yet provide ZFS support to be able to read and write these external drives.

gpw928 I'm using these drives to store non-critical data that I might want to share from time to time with other people (hence my initial requirement of being able to use a drive elsewhere, so I can easily read/write the data on other users' computers).

Once again, thanks a lot for everyone's tips and suggestions, and I hope this is useful for other users facing a similar use case!
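For reference, in case someone else hits the same thing, the re-creation looked something like this (device name from memory, may differ on your system):
Code:
# zpool destroy mypool
# zpool create \
      -o feature@project_quota=disabled \
      -o feature@userobj_accounting=disabled \
      mypool /dev/sdb1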

Regards,
Lucas
 
I tried ext4 with the native FreeBSD driver. Text files got corrupted on it, and sometimes showed up as binaries, especially after an improper shutdown. Improper shutdowns aren't supposed to happen, but the files are still supposed to be more resilient than that. I'm not sure whether ext2 or ext3 behave better in this respect. ext4 is fine as an intermediary between FreeBSD and Linux, but not for long-term storage.

sysutils/e2fsprogs is needed to check and clean the filesystem. Automounting this filesystem on bootup may cause problems if the filesystem is unclean.
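Something along these lines (device name is an example):
Code:
# pkg install e2fsprogs
# e2fsck -f /dev/da0s1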


Update: On FreeBSD, ext4 messes up files even on clean reboots, perhaps even without reboots. It may be a problem if a file was left open in a terminal during a clean reboot. That doesn't happen on UFS. Maybe ext3 would be better than ext4, as it's said to be more stable, and it also has journaling. ext2 is considered stable, but doesn't have journaling. ext2 is sometimes suggested for flash disks, but DOS formats are more universal for that. I'm not sure those safety features of ext3 would work on FreeBSD.
 
… I haven't tried yet, but my bet is that FreeBSD won't inject any feature flags "presents" for free when creating a pool …

A few are enabled by default:
  • allocation_classes
  • async_destroy
  • bookmark_v2
  • bookmark_written
  • bookmarks
  • device_rebuild
  • device_removal
  • draid
  • edonr
  • empty_bpobj
  • encryption
  • filesystem_limits
  • large_blocks
  • large_dnode
  • livelist
  • multi_vdev_crash_dump
  • obsolete_counts
  • redacted_datasets
  • redaction_bookmarks
  • resilver_defer
  • sha512
  • skein
  • zilsaxattr
  • zpool_checkpoint
  • zstd_compress
Code:
% uname -KU
1400059 1400059
% tail -f -n 0 /var/log/messages
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: ugen1.10: <Verbatim STORE N GO> at usbus1
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: umass4 on uhub6
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: umass4: <Verbatim STORE N GO, class 0/0, rev 2.00/1.00, addr 10> on usbus1
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4 at umass-sim4 bus 4 scbus7 target 0 lun 0
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4: <Verbatim STORE N GO PMAP> Removable Direct Access SCSI device
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4: Serial Number 07B7050762213D03
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4: 40.000MB/s transfers
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4: 7640MB (15646720 512 byte sectors)
May 28 00:20:36 mowa219-gjp4-8570p-freebsd kernel: da4: quirks=0x2<NO_6_BYTE>
^C
% lsblk da4
DEVICE         MAJ:MIN SIZE TYPE                                          LABEL MOUNT
da4              0:134 7.5G -                                                 - -
% su -
Password:
root@mowa219-gjp4-8570p-freebsd:~ # gdisk /dev/da4
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-15646686, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-15646686, default = 15644671) or {+-}size{KMGTP}:
Current type is A503 (FreeBSD UFS)
Hex code or GUID (L to show codes, Enter = A503): a504
Changed type of partition to 'FreeBSD ZFS'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/da4.
Warning: The kernel may continue to use old or deleted partitions.
You should reboot or remove the drive.
The operation has completed successfully.
root@mowa219-gjp4-8570p-freebsd:~ # zpool create pusscat /dev/da4p1
root@mowa219-gjp4-8570p-freebsd:~ # zpool get all pusscat | grep \ enabled | sort
pusscat  feature@allocation_classes     enabled                        local
pusscat  feature@async_destroy          enabled                        local
pusscat  feature@bookmark_v2            enabled                        local
pusscat  feature@bookmark_written       enabled                        local
pusscat  feature@bookmarks              enabled                        local
pusscat  feature@device_rebuild         enabled                        local
pusscat  feature@device_removal         enabled                        local
pusscat  feature@draid                  enabled                        local
pusscat  feature@edonr                  enabled                        local
pusscat  feature@empty_bpobj            enabled                        local
pusscat  feature@encryption             enabled                        local
pusscat  feature@filesystem_limits      enabled                        local
pusscat  feature@large_blocks           enabled                        local
pusscat  feature@large_dnode            enabled                        local
pusscat  feature@livelist               enabled                        local
pusscat  feature@multi_vdev_crash_dump  enabled                        local
pusscat  feature@obsolete_counts        enabled                        local
pusscat  feature@redacted_datasets      enabled                        local
pusscat  feature@redaction_bookmarks    enabled                        local
pusscat  feature@resilver_defer         enabled                        local
pusscat  feature@sha512                 enabled                        local
pusscat  feature@skein                  enabled                        local
pusscat  feature@zilsaxattr             enabled                        local
pusscat  feature@zpool_checkpoint       enabled                        local
pusscat  feature@zstd_compress          enabled                        local
root@mowa219-gjp4-8570p-freebsd:~ # zpool destroy pusscat
root@mowa219-gjp4-8570p-freebsd:~ # zpool labelclear /dev/da4p1
root@mowa219-gjp4-8570p-freebsd:~ # zfs --version
zfs-2.1.99-FreeBSD_gc0cf6ed67
zfs-kmod-2.1.99-FreeBSD_gc0cf6ed67
root@mowa219-gjp4-8570p-freebsd:~ #
 
grahamperrin thanks for taking the time to do that. So I guess I was wrong. Despite this, I created the ZFS pool in FreeBSD, moved the drive over, and then imported it on GNU/Linux with no complaints. Maybe these are a set of features present in all ZFS pools (or ZFS versions)? I have no idea, as I'm completely new to ZFS.
 