Other Best filesystem for a shared partition between FreeBSD and Linux

Hi,

I am setting up a soon-to-be dual-boot system on which I want to run FreeBSD 13.0 and a random Linux distribution.
Partition scheme is GPT in case this matters.

I want to have a shared partition so I can access it from both systems (rw). What filesystem do you recommend?
My first thought was NTFS, but that is nonsense since no Windows™ systems are involved.

Should I go for ext4 or something else?
 
I want to have a shared partition so I can access it from both systems (rw). What filesystem do you recommend?
My first thought was NTFS, but that is nonsense since no Windows™ systems are involved.
If you plan on sharing users too (i.e. user IDs), then possibly ext2 or UFS. I think ext2 is better supported than the more recent ext versions. UFS is also an umbrella name for a few different formats, so it could also pose a problem.

If you don't plan to share users, then I actually believe that NTFS isn't a bad choice. You won't run into so many permissions issues with it, and it is more flexible in terms of file sizes compared to e.g. FAT32.

Of course in the ideal world you could use UDF. Unfortunately I don't believe I have seen one vendor who has implemented both read and write support.
 
If you plan on sharing users too (i.e. user IDs), then possibly ext2 or UFS. I think ext2 is better supported than the more recent ext versions.
ext2 can be read and written by all Linuxes, BSDs and even Windows.
I use it for all my data transport drives.

ext3 and ext4 support is not common with all OSes, I agree.
NTFS is a four-letter word, like "s**t" (just my personal opinion).

UFS is also an umbrella name for a few different formats, so it could also pose a problem.
Of course in the ideal world you could use UFS. Unfortunately I don't believe I have seen one vendor who has implemented both read and write support.
There was another thread with some talk about UFS.
Seems like there are as many UFS variants as there are BSD variants. Probably not a good idea to use it for data transport :)
 
zfs(8) should be fine since we now share a common codebase with the L*x'ers. Just stick to commonly supported zpool-features(7).
I'm very happy sharing files between FreeBSD and Linux using ZFS. In my case it is not just a single partition but a raid10 array of 4 x 4 TB disks. It is almost always running under FreeBSD, but I need to be able to boot into Linux for some experiments. In the past couple of years I've used that array from Gentoo, Debian, PopOS, and FreeBSD with no problems.

OpenZFS has been a happy development in file systems, and since it became available I would use ZFS for a shared disk or partition between FreeBSD and Linux.

Limitations:
1. I think it is still necessary to create the zpool under FreeBSD. In any case it is probably safest to do so. I use the zfs from base.
2. When first accessing the pool in Linux it will not import without the "force" option -- that's OK, just use
Code:
zpool import -f tank
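The "force" is usually only needed because the pool was not exported by the other OS before rebooting. A minimal sketch of the round trip, assuming the pool is called tank as above:
Code:
# on FreeBSD, before rebooting into Linux
zpool export tank

# on Linux; -f is only needed if the pool was never exported
zpool import -f tank

# on Linux, before rebooting back into FreeBSD
zpool export tank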
 
In 12.x OpenZFS is not in base, but it is available in ports(7) IIRC. So the argument that you have the same source base holds true from FreeBSD 13.x onwards.
Right. I actually created my zpool under FreeBSD 12 (then "current") using the "Solaris" zfs from base. Nevertheless, the Linux systems can import it and use it with their ZFS 0.8.4 or ZFS 2.0. With the newer ZFS in FreeBSD 13 it may be possible to import an (OpenZFS) pool created under Linux into FreeBSD. Anyone know?

For now I've avoided upgrading my zpool to the latest features, and I still recommend creating the zpool under FreeBSD using the zfs from base (which is now some version of OpenZFS).
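If you want to see what the other OS would need to support before you move a pool around, zpool can tell you; a quick sketch, with tank as the example pool name:
Code:
# list the feature flags this ZFS implementation knows about
zpool upgrade -v

# show which feature flags are enabled/active on an existing pool
zpool get all tank | grep feature@

# and avoid a blanket "zpool upgrade tank" if the pool has to stay portable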

I've often found that Linux blasts ahead recklessly adding features, and I wouldn't be surprised if their "OpenZFS" always has newer (and unnecessary) features not yet in FreeBSD's "OpenZFS", so I avoid creating zpools in Linux and blindly assuming that their OpenZFS pool will be usable in FreeBSD. FreeBSD is the Gold Standard for ZFS. See Migrating zfs from Linux to FreeBSD
 
Thank you guys, lots of useful answers here! 🥳

If you don't plan to share users, then I actually believe that NTFS isn't a bad choice. You won't run into so many permissions issues with it, and it is more flexible in terms of file sizes compared to e.g. FAT32.
I indeed do not plan to share users. That's why I initially thought of NTFS.

I think I will do a coin flip between ext4 and NTFS. How good are read/write support and performance for each of them on FreeBSD?
 
I haven't used NTFS extensively on FreeBSD, but I do use SD cards formatted with ext4 almost every day. So far the only reliable implementation IMO is sysutils/fusefs-lkl. I switched to it when I discovered that long symlinks got broken with the others. There was another discussion thread here regarding ext4 some time ago.
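For completeness, mounting looks roughly like this on FreeBSD (da0p1 is just an example device, and I'm quoting the lklfuse options from memory, so check the port's docs):
Code:
# in-kernel ext2fs driver (handles ext2/3/4)
kldload ext2fs
mount -t ext2fs /dev/da0p1 /mnt

# FUSE-based driver from sysutils/fusefs-lkl
kldload fusefs
lklfuse -o type=ext4 /dev/da0p1 /mnt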
 
I use NTFS to exchange data between Linux and FreeBSD systems via external SSDs (Samsung T5); I think it's the best choice for my use cases. In particular, NTFS is the default file system of several appliances that I use, e.g. a DVB-T2 receiver with a PVR feature, and an HDMI recorder. Both are based on Linux under the hood. I use the sysutils/fusefs-ntfs port without any problems so far.

I also tried ext{2,3,4} in the past – while reading worked fine with FreeBSD, the write support seemed immature and caused problems after some time.

By the way, there is also exFAT (sysutils/fusefs-exfat). It also works fine, but I think the performance is somewhat lower compared to NTFS. I use it to access the SD cards of my digital cameras.
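In case it helps, the mount commands look roughly like this (device names and mount points are examples; double-check the helper names after installing the ports):
Code:
# NTFS via sysutils/fusefs-ntfs
ntfs-3g /dev/da0p1 /mnt/ntfs

# exFAT via sysutils/fusefs-exfat
mount.exfat-fuse /dev/da0p1 /mnt/exfat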
 
I haven't used NTFS extensively on FreeBSD, but I do use SD cards formatted with ext4 almost every day. So far the only reliable implementation IMO is sysutils/fusefs-lkl. I switched to it when I discovered that long symlinks got broken with the others. There was another discussion thread here regarding ext4 some time ago.
Did you file a bug report about that? They can't fix what they don't know about...
 
Linux's older ZFS implementation is not 100% compatible with FreeBSD's ZFS implementation. I wouldn't use it for serious purposes. Hopefully OpenZFS will become the best option.
 
As with any post to a forum, this is primarily my opinion.

Linux's ZFS is not the old one anymore... FreeBSD 13 shares its code base with OpenZFS 2.0, which has much more in common with ZFSonLinux 0.8.x than with FreeBSD 12's implementation.

That said, there are almost zero cases where ZFS isn't the right answer for "I'm making a new filesystem, what should I use?" at this point. (Those cases are primarily ones where you need to attach it to legacy / embedded systems that can't support ZFS, or systems with very small (<4G) memory footprints.)

Top highlights that make it worthwhile:
* snapshots. (Make sure to set up an auto-snapshot tool like sysutils/zfstools).
* data integrity; built-in checksums and scrub.
* transparent compression (lz4 is fast enough to make using compression your default choice, and can make your drive "faster" depending on use cases.)
* send/recv for backups, a total game changer for large filesystems (see the sketch after this list).
* [13.0+] built-in encryption
* Cross-platform -- Linux, FreeBSD, macOS (the macOS port is not as mature, but actively worked on)

If you're doing more than one drive, then there are other benefits:
* Raidzn, mirrors, stripes — all managed from bits-on-the-disk to bytes-in-your-file by one system, not separate RAID and filesystem providers.

And so many others depending on use case; boot environments, zvols, multi-level caching, ...
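To make the highlights above concrete, here is a minimal sketch of the compression/snapshot/send workflow; the pool and dataset names (tank/data, backup/data) are only examples:
Code:
# transparent compression on a dataset
zfs set compression=lz4 tank/data

# take a snapshot
zfs snapshot tank/data@2021-06-01

# replicate it to another pool (or pipe through ssh to another machine)
zfs send tank/data@2021-06-01 | zfs receive backup/data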
 
My experience is that OpenZFS on FreeBSD is not stable whereas the base ZFS is.
Sharing ZFS drives between Linux and FreeBSD works fine and gives the best performance and stability.
I have used ZFS as my main filesystem for years and used the same pool on Linux and FreeBSD. The only time things could go wrong was when I upgraded pool features and FreeBSD didn't support them (happened a few times :D).

Now I'm on FreeBSD 13 (OpenZFS in base) with a compression=zstd pool, for a few weeks now, doing just standard desktop/jails/torrents/compiling. I didn't observe anything going wrong.
 
UFS2 is read-only on Linux, AFAIK. ext2/3/4 have tools on both OSes (FreeBSD has ext2fs.ko and sysutils/e2fsprogs) and have a journal, so IMO they are the best option if no Windows is involved.
If Windows is involved, there's sysutils/fusefs-ntfs, but eh... NTFS at least has some kind of journal, so it's still better than FAT32.
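If you go that route, creating and checking the shared ext4 partition from FreeBSD with sysutils/e2fsprogs would look roughly like this (da0p3 is just an example partition):
Code:
# create an ext4 filesystem with a label on the shared partition
mke2fs -t ext4 -L shared /dev/da0p3

# fsck it from FreeBSD with the same tools
e2fsck -f /dev/da0p3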
 
I have never seen freebsd-ufs being readable from Linux, so that's a no-go for sharing data.
XFS is an idea, but fusefs-lkl has some serious limitations.
 
I have never seen freebsd-ufs being readable from Linux.
It is definitely readable on Linux, no problem with that. Automounting will fail because mounting UFS on Linux requires specifying one of the ten supported UFS types, which the automounter cannot guess for you, so you have to mount it manually from the command line.


The Linux kernel offers read-write support for both FreeBSD UFS1 and UFS2, but write support is disabled by default, so most distributions ship UFS read-only and you have to build a custom kernel if you want write support. openSUSE does have it enabled by default; some others may as well, but I am not aware of any.
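For reference, the manual mount on Linux looks roughly like this (the device name is an example; drop "ro" only if your kernel was built with UFS write support):
Code:
# FreeBSD UFS2, read-only
mount -t ufs -o ro,ufstype=ufs2 /dev/sdb1 /mnt/ufs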

That being said, I have no clue whether SU+J is supported and cannot find any relevant information. I will try and tell you.
 
Personally, I would not trust writing to any file system using an implementation that wasn't done professionally by the first parties. Meaning: to write to FreeBSD's UFS, you ought to have your code reviewed by Kirk McK; to write ext2, the same with Ted Ts'o. File system metadata is too complicated to leave the writing to reverse engineering, grey-box work, and amateurs. Read-only support is fine (reading can't break anything permanently), but writing is scary.

Here would be my suggestion for the shared partition: Either use a file system that really has shared source code (the only practical example I know of is OpenZFS). Or use a very simple file system such as FAT, but don't trust it very much, only for transfer of data. Or use only the native OS to read/write the file system. That could for example be done by making the shared partition an ext4 partition, using native mount on Linux. When running FreeBSD, start a small virtual machine in FreeBSD that runs Linux and reads/writes the shared partition, and exports it to a BSD mount using a network protocol such as NFS (SMB would work too, but NFS is easier to set up between Linux and *BSD). Or: Move the shared disk to a separate device on the network, and serve it using a cheap network server (for example a Raspberry Pi) using NFS.
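For the last two variants, the NFS part is the easy bit; a minimal sketch, assuming the Linux side (VM or Raspberry Pi) is 192.168.1.20, the FreeBSD client is 192.168.1.10, and the shared data lives in /shared (all of these are made-up example values):
Code:
# /etc/exports on the Linux side
/shared 192.168.1.10(rw,sync,no_subtree_check)

# apply the export table on Linux
exportfs -ra

# mount it on FreeBSD
mount -t nfs 192.168.1.20:/shared /mnt/shared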
 
I tried the following little test:
* create a new UFS2 SU+J filesystem on a USB stick with FreeBSD 12.2
* put various files onto it
* mount it read-write on openSUSE Leap 15.2
* delete a few files and add a few others
* mount it on FreeBSD again

Everything worked as expected. openSUSE could read existing files, delete them and write new ones. Back on FreeBSD, deleted files were effectively deleted and all other files were there and readable. This says nothing about reliability in the long run, but at least the basic functionality is there.
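In case anyone wants to repeat the test, setting up such a stick on FreeBSD goes roughly like this (da0 is an example device; this is the generic recipe, not necessarily the exact commands I typed back then):
Code:
# partition the stick and create UFS2 with soft updates journaling (SU+J)
gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs -j /dev/da0p1

# mount it and copy some files onto it
mount /dev/da0p1 /mnt
cp -R ~/testfiles /mnt/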

The only surprise was that the UFS kernel module, like many other file system modules (basically all filesystems that are uncommon in the Linux world, but also less exotic ones like JFS and F2FS), is blacklisted by default on openSUSE. This means that to be able to mount my stick I had to load the module manually with # modprobe ufs. To have it load automatically you need to edit /etc/modprobe.d/60-blacklist_fs-ufs.conf
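Concretely, the change is just commenting out one line in that file (a sketch; the surrounding comments in the real file may differ):
Code:
# /etc/modprobe.d/60-blacklist_fs-ufs.conf
# comment out the next line so the ufs module can be loaded automatically
#blacklist ufs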

It is clearly stated that file systems not enabled by default are not officially supported by SUSE and you are free to use them at your own risk. While I would personally not use this solution for valuable data without better testing, it is probably not more dangerous than using ext4 on FreeBSD.
 
Good to know that it basically works. And we agree: the fact that it worked once doesn't imply that it is trustworthy in the long term.

One more hint, from experience shooting myself in the foot: if you have a shared file system accessed by two OSes on the same machine, remember to UNMOUNT it before switching to the other OS. A long time ago, I had a shared NTFS file system on my laptop, which was being rebooted between Windows and Linux. Except that I was not actually rebooting the two OSes, I was just hibernating them. When you hibernate an OS, it doesn't actually unmount, because there was no shutdown. That means that some of the data that's supposed to be written to the file system is not actually on disk yet, but held in the file system's in-memory cache. And when the other OS starts, it sees that (a) the shared file system was not cleanly unmounted, and (b) the newly written file is not on it. So always unmount/remount when switching OSes, or just do a full shutdown/reboot cycle.
 