Unix filesystem for a hard disk used by both FreeBSD + Linux?

I'd tend to agree with Snurg and obsigna on this, for a couple of reasons. The first is that if the drive is a portable USB drive, it may be connected to multiple computers that are themselves possibly portable, and not all of those devices may be OK with more powerful/complicated file systems. I'd tend to use EXT2 or EXT3 in this circumstance.

EXT2 might not be a good option if there's much writing (it has no journaling), but it may be OK for mainly-read personal use. For read/write, EXT3 has journaling and might be better, IMO. Linux-based cell phones use EXT3 or EXT4, and they all use some form of NAND-based storage. Unfortunately, there's no EXT4 for FreeBSD. No way I'd ever use *FAT* for this. Eventually it would be nice to see BeFS ported for this purpose, as it has some great features.
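For example (purely a sketch, with made-up device nodes), the drive could be formatted from Linux and then mounted on FreeBSD with the in-kernel ext2fs(5) driver, which can also mount ext3 (though without using the journal, as far as I know):
Code:
mkfs.ext3 -L PORTABLE /dev/sdb1        # on Linux: create the filesystem
mount -t ext2fs /dev/da0p1 /mnt/usb    # on FreeBSD: mount it via ext2fs(5)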

Some EXT2/USB users suggest setting fstab to mount everything with the "noatime" option. No matter what, all USB drives eventually fail, so you should have a good backup plan.
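For illustration, a noatime entry in a Linux fstab might look something like this (the device node, mount point, and filesystem type are just example placeholders):
Code:
# hypothetical portable drive; adjust device, mount point and fs type
/dev/sdb1   /mnt/usbdisk   ext3   defaults,noatime   0   2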
 
I will probably use ZFS, but how much work and knowledge of FreeBSD internals is needed to support writes on XFS or EXT4?
 
I corrected that sentence for you ;)
A FS that wasn't designed to be production-safe from the beginning, and whose devs flat-out refuse to learn from the mistakes of previous filesystems, should NEVER be trusted to hold any data. It's nothing more than an object lesson in NIH syndrome.
... and it's used in production in many business-critical environments at big companies, as the / (root) filesystem and other filesystems under SLES (SUSE Linux) for SAP HANA deployments ;)
 
I will probably use ZFS, but how much work and knowledge of FreeBSD internals is needed to support writes on XFS or EXT4?
It's easy (even automount) for EXT4, with writes, using ext4 over FUSE. From what I recall, XFS is not usable on FreeBSD.
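In case it helps, a rough sketch of the FUSE route on FreeBSD with the sysutils/fusefs-ext2 port (device node and mount point are invented; check the port's docs, since write support is considered experimental):
Code:
pkg install fusefs-ext2
kldload fusefs                            # the module is named "fuse" on older releases
fuse-ext2 /dev/da0p1 /mnt/ext4 -o rw+     # rw+ turns on the experimental write support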
 
... how much work and knowledge of FreeBSD internals is needed to support writes on XFS or EXT4?
Developing an in-kernel file system is very difficult. Using fuse makes it just difficult; at least it takes much of the kernel programming out of the equation; still keeping file system data structures organized in one's brain is tough. But developing a 100% compatible read-write implementation of an existing file system, in a case where the on-disk data structures and the invariants for updating them are not formally documented, but one has to read the existing code to determine the on-disk design, is extremely difficult. It is a task that should probably be left to the original implementors of the file system, or to people who are in close contact with them.

(about Btrfs):
... and it's used in production in many business-critical environments at big companies, as the / (root) filesystem and other filesystems under SLES (SUSE Linux) for SAP HANA deployments ;)
Btrfs is a machine for losing data. I don't know what SUSE was thinking when they made it the default. But consider this: the root (/) file system is not of great importance. If it is destroyed, one can reinstall, or restore from a backup. The real user data for SAP HANA is not stored on the root file system of the servers (duh).
 
(about Btrfs):

Btrfs is a machine for losing data. I don't know what SUSE was thinking when they made it the default. But consider this: the root (/) file system is not of great importance. If it is destroyed, one can reinstall, or restore from a backup. The real user data for SAP HANA is not stored on the root file system of the servers (duh).
You missed the part "and other filesystems" ;)
 
I corrected that sentence for you ;)
A FS that wasn't designed to be production-safe from the beginning, and whose devs flat-out refuse to learn from the mistakes of previous filesystems, should NEVER be trusted to hold any data. It's nothing more than an object lesson in NIH syndrome.
No, man. You've gone too far. Btrfs is only bad for USB. On HDD, I used Btrfs for one big root / and was very pleased with btrfs subvolume snapshots ;) Perhaps I've just never come across a situation where send/receive over the network is needed, so I think Btrfs subvolume snapshots are more than enough for me. One minor problem with Btrfs is that it slows down very quickly over time :)
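For what it's worth, the snapshot workflow I mean looks roughly like this (paths are only examples):
Code:
btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)   # read-only snapshot of /
btrfs subvolume list /                                       # see what exists
btrfs subvolume delete /.snapshots/root-2018-01-01           # drop an old one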
 
No, man. You've gone too far.
A FS whose documentation still warns about using a shipped and activated feature, and whose bug status for a long time was (or still is??) "we have no idea why it destroys your data, but we still keep it", is a toy project at best, not a file system that should even be considered for use on live systems.
Also: the rapid performance degradation problem (IIRC caused by write holes and fragmentation) is as old as Btrfs itself and hit me within a few weeks back when I tested it for the first (and last) time (~3-4 years ago, IIRC). If this still isn't fixed, Btrfs is an even bigger dumpster fire than I imagined...

Perhaps I've just never come across a situation where send/receive over the network is needed
Wow, is this still not available for btrfs?? What's the point of features like snapshots and built-in data resiliency if I can't make proper incremental backups that preserve those safeguards, and have to fall back to dump or tar?


Regarding SUSE:
I don't know why they still try to cope with that can of worms; even RH dropped their support for btrfs installations, and they put some serious money into that project...
We have two 3rd-party application servers running SLES in VMs - both use traditional ext3/4. I had a chat over lunch with the support guy who did the last major upgrade for one of them, and he also talked a bit about their internal evaluations of btrfs - essentially it caused way more problems than it could ever have solved for them, so they just abandoned any efforts to use it. In fact, they are currently evaluating ZFS for user data on their bare-metal appliances, and he was very interested in how we use ZFS within our infrastructure ;)
 
Regarding SUSE:
I don't know why they still try to cope with that can of worms; even RH dropped their support for btrfs installations, and they put some serious money into that project...
We have two 3rd-party application servers running SLES in VMs - both use traditional ext3/4. I had a chat over lunch with the support guy who did the last major upgrade for one of them, and he also talked a bit about their internal evaluations of btrfs - essentially it caused way more problems than it could ever have solved for them, so they just abandoned any efforts to use it. In fact, they are currently evaluating ZFS for user data on their bare-metal appliances, and he was very interested in how we use ZFS within our infrastructure ;)
From what I have heard, Red Hat dropped BTRFS because they only have XFS developers and no BTRFS developers, so they could not modify/develop BTRFS the way they wanted; they had also already invested heavily in XFS ... but XFS is a dead end (no data checksumming, no compression, no deduplication, nothing ...). That's just how Red Hat works: they don't provide the latest technology, they only charge for subscriptions :p
 
If you are looking for a common file system between Linux and FreeBSD, there are some options. Many seem to like ZFS. That's all well and good, but now I am going to give you the one that is guaranteed to work on FreeBSD, Linux, and even Windows and Mac OS X if you choose, as they all support it reliably...

That file system is FAT-32. You cannot go wrong with it. It's supported everywhere. All the BSDs support it. The Linux kernel supports it, so over 200 Linux distributions have it. Mac OS X has it for interoperability with preformatted flash drives. Windows supports it because... well... Microsoft originated it. I think they all support exFAT as well, which is getting pretty common on larger thumb drives.
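Formatting it is a one-liner on either side; a sketch with made-up device nodes:
Code:
newfs_msdos -F 32 -L PORTABLE /dev/da0s1   # FreeBSD
mkfs.vfat -F 32 -n PORTABLE /dev/sdb1      # Linux (dosfstools)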

One other thing... You will find that the performance of a USB-interfaced hard disk is not very good. I did this for a while and decided to start using the FireWire interface instead. Even though the USB spec says it's faster, in practice FireWire is faster because FW uses DMA data transfers while USB uses PIO transfers. I don't know if this has changed over the years. Better yet, some machines have an external SATA port you could connect the HD to. That's the best of both worlds: portability and performance. I use a bare HD with a hot-swap bay to do backups and take my data with me.
 
I think you mean FAT here. Anyway, there is split(1) to solve that.
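Assuming "that" refers to FAT's 4 GB file-size limit, a minimal sketch (file names invented):
Code:
split -b 3G bigfile.img bigfile.part.   # cut into 3 GB chunks that fit on FAT
cat bigfile.part.* > bigfile.img        # reassemble afterwards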

Thank you. Besides, it is odd that EXT2 is still in Linux distributions, because it is very old. Today EXT4 rules the Linux planet. EXT2 may disappear from general-purpose distribution kernels simply because it is old.
(Linus will still ship it in the kernel source, for those who compile a custom Linux kernel themselves.)
 
A FS whose documentation still warns about using a shipped and activated feature, and whose bug status for a long time was (or still is??) "we have no idea why it destroys your data, but we still keep it", is a toy project at best, not a file system that should even be considered for use on live systems.
Also: the rapid performance degradation problem (IIRC caused by write holes and fragmentation) is as old as Btrfs itself and hit me within a few weeks back when I tested it for the first (and last) time (~3-4 years ago, IIRC). If this still isn't fixed, Btrfs is an even bigger dumpster fire than I imagined...


Wow, is this still not available for btrfs?? What's the point of features like snapshots and built-in data resiliency if I can't make proper incremental backups that preserve those safeguards, and have to fall back to dump or tar?

Uhm, Linux users like me don't worry much about the documentation. Just follow a tutorial on some random blog on the internet, or the distro wiki or distro forum, and everything's done. I've never read the documentation myself; mainly I read only the tutorials and examples, and many others are like me :p So I never knew what the btrfs documentation contains :rolleyes:

About send/receive: my fault, btrfs can send/receive over the network via a pipe with ssh, just like zfs, but it can't resume the way zfs can; you have to use a wrapper script for that job. It's just that I've never used this, so I don't know for sure. Btrfs is much more full-featured now :D
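For the record, both pipes look much the same; a sketch with invented host and dataset names:
Code:
# Btrfs: the source snapshot must be read-only
btrfs send /.snapshots/root-2018-01-01 | ssh backuphost btrfs receive /backup
# ZFS: -i sends the increment between two snapshots
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data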
 