UFS: Is FreeBSD UFS not good?

giahung1997 (Guest)
I don't dare to use the word "sucks"; I'm afraid I'd be attacked. Today I reinstalled MX Linux 17 alongside FreeBSD and found it still supports installing on EXT2, even though the default is EXT4. The installation was as straightforward as usual and I'm now running MX on EXT2 while posting this thread. My knowledge is limited, so I won't post benchmark results because I don't understand those values, but in real-life daily desktop usage EXT2 on MX is only about 30% slower than EXT4, and even so it is far faster than UFS on FreeBSD. For example, when extracting a heavy archive like openjdk8-docs on MX or llvm40 on FreeBSD, EXT2 does much better. I thought UFS would be comparable to EXT4, but it's not :(

It's the same old dying HDD I keep around to test how long it will live. I don't think EXT2 is better designed than UFS, but the caching mechanism on Linux is much better than FreeBSD's, forgive me if I'm wrong. Watching in Conky how it uses RAM and releases it immediately after a copy is done is just too impressive. That caching makes a slow USB 2.0 pen drive formatted with FAT32 far faster than on Windows when copying many small files (a different device, not the HDD I'm referring to in the previous paragraph).
 
I don't dare to use the word "sucks"; I'm afraid I'd be attacked.

giahung1997, that would not be the best choice of words to describe how you feel about it, but no one is going to attack you.

I use UFS on all 5 of my FreeBSD machines and am satisfied with how it performs. I can do a hard reboot without it hosing my filesystem, and I've never had corrupted or lost files.
 
Haven't used ext2 in a long time, but I would have thought it would actually be faster as I don't think it has journaling.

Like Trihexagonal I've had hard reboots on UFS2 without issue. I use ZFS on towers (and servers) running FreeBSD, but on a couple of multibooting laptops I use UFS. The only benchmarks I have are ffmpeg re-encoding, and ext4 was slightly faster, but it came to only a few seconds on files of over 1 GB.

So, to answer the question, my answer would be: no, that's not the case, it's fine. I don't know how well any filesystem will work on a dying drive, though, and you mention that's what you're using.
 
I don't think EXT2 is better designed than UFS, but the caching mechanism on Linux is much better than FreeBSD's, forgive me if I'm wrong.
Just because it's FreeBSD doesn't mean it has to be best at everything. And if something doesn't work well for you then I can very well imagine that this would plain out suck. No offense taken.

But a few things, because this takes me back to the early Solaris days which, as you might know, also used UFS. It was soon dubbed "Slowaris" because the filesystem allegedly didn't have journaling and was slow. Although those observations were true, they overlooked one very important aspect: this wasn't a "ready out of the box" kind of environment; system administrators were expected to properly tune the filesystem in order to get the most out of it (though journaling was soon enabled by default on Solaris).

I think this could be a similar story, and it's why tunefs(8) exists. But as others have also mentioned there are more concerns with a filesystem than speed alone, reliability and data integrity should always outweigh any speed aspects. Which is in my opinion exactly the case for UFS.

Always keep in mind that getting the best tool for the job is important, and that tool doesn't always have to be FreeBSD.

So I'd say if speed is really that important then use Linux together with EXT2. If you insist on FreeBSD and UFS you might get good results if you actually tune the filesystem before use.
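For what it's worth, a minimal tuning pass could look something like the sketch below. The device name /dev/ada0p2 is only a placeholder (check your own layout with gpart show), and tunefs(8) wants the filesystem unmounted or mounted read-only before it will change anything:

Code:
# show the current tuning parameters of the filesystem
tunefs -p /dev/ada0p2
# enable soft updates (the usual first step for UFS write performance)
tunefs -n enable /dev/ada0p2
# optionally add soft updates journaling (SU+J) on top of that
tunefs -j enable /dev/ada0p2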
 
giahung1997, that would not be the best choice of words to describe how you feel about it, but no one is going to attack you.

I use UFS on all 5 of my FreeBSD machines and am satisfied with how it performs. I can do a hard reboot without it hosing my filesystem, and I've never had corrupted or lost files.
I'm really bad at English :( BTW, I didn't say UFS lost my files; it's all about performance :)
 
Haven't used ext2 in a long time, but I would have thought it would actually be faster as I don't think it has journaling.

Like Trihexagonal I've had hard reboots on UFS2 without issue. I use ZFS on towers (and servers) running FreeBSD, but on a couple of multibooting laptops I use UFS. The only benchmarks I have are ffmpeg re-encoding, and ext4 was slightly faster, but it came to only a few seconds on files of over 1 GB.

So, to answer the question, my answer would be: no, that's not the case, it's fine. I don't know how well any filesystem will work on a dying drive, though, and you mention that's what you're using.
I didn't know this. As far as I know EXT4 has journaling. EXT4 is the fastest, then XFS; btrfs sucks.
Perhaps because it doesn't write a journal it causes less writing to the failing drive, so it's faster than normal, he he :D
 
Just because it's FreeBSD doesn't mean it has to be best at everything. And if something doesn't work well for you then I can very well imagine that this would plain out suck. No offense taken.

But a few things, because this takes me back to the early Solaris days which, as you might know, also used UFS. It was soon dubbed "Slowaris" because the filesystem allegedly didn't have journaling and was slow. Although those observations were true, they overlooked one very important aspect: this wasn't a "ready out of the box" kind of environment; system administrators were expected to properly tune the filesystem in order to get the most out of it (though journaling was soon enabled by default on Solaris).

I think this could be a similar story, and it's why tunefs(8) exists. But as others have also mentioned there are more concerns with a filesystem than speed alone, reliability and data integrity should always outweigh any speed aspects. Which is in my opinion exactly the case for UFS.

Always keep in mind that getting the best tool for the job is important, and that tool doesn't always have to be FreeBSD.

So I'd say if speed is really that important then use Linux together with EXT2. If you insist on FreeBSD and UFS you might get good results if you actually tune the filesystem before use.
You're misunderstanding me badly :( I said that, as I've read on this forum, UFS should be comparable to EXT4, but in reality when I tried it, it was even slower than EXT2. And from my own experience I think that's not because EXT2 is better designed than UFS, but because of the way the Linux kernel caches in RAM; for example, a FAT32-formatted USB 2.0 pen drive still performs faster on Linux than on the native OS it was created/formatted on. I never said anything of FreeBSD's is superior, never.
 
Try adding the noatime flag to the mount point in /etc/fstab. mount(8) has the details.

In essence, every time you read a file or its metadata in FreeBSD, atime (access time) is updated. This is not needed by some users, and Linux turns it off (or relaxes it to relatime) by default in most distros.

You could also try async, although I wouldn't recommend it unless your environment is pretty solid. This has the potential to lose data on a power loss.
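As a sketch, a UFS entry in /etc/fstab with noatime could look like this; the device names and mount points are placeholders for whatever your system actually uses:

Code:
# Device        Mountpoint  FStype  Options           Dump  Pass#
/dev/ada0p2     /           ufs     rw,noatime        1     1
# async trades safety for speed; only use it if you can afford to lose data on power loss
/dev/ada0p3     /scratch    ufs     rw,noatime,async  2     2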
 
Is there a way to set noatime globally?

I'm using UFS on everything and it does seem a little sluggish with my SSDs. Not blazing any trails with my USB flash drives either.
 
One thing to watch out for is how filesystem caching happens on different OSes.

For example, a lot of Linux distros use write-back caching. This means a file is written to a cache first (whether it's in RAM or the buffer on the disk). Once that completes, the program is told that the write is complete. Later, in the background, the file is flushed from cache and actually written to the disk. Ext2 is notorious for this, which makes it appear "very fast" for writing files.

FreeBSD is more conservative and tends to use write-through caching. Files are written simultaneously to the cache and the physical disk. But the program is only told the write is complete once the file is on the disk. (Soft updates changes things a bit, with ordering and performance enhancements.)

It's a trade-off between "fast writes" and "data safety". Which is more important depends on the person and the data.

Write a large file to a slow disk on Linux. When you get the "copy complete" message, power off the system (as in, hold the power button in for 5 seconds). On boot up, see if you can find the file you just copied. :)

Try the same with different filesystems on Linux.

Then do the same on FreeBSD.

Then decide which you think is "better". :D
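A gentler variant of the same experiment, without yanking the power, is to time how long a sync takes right after the copy "completes"; that gap is the data that was only sitting in cache. A sketch, with the file name and mount point as placeholders:

Code:
# copy a large file to the slow disk; the prompt comes back when cp thinks it is done
cp bigfile.iso /mnt/slowdisk/
# now time how long it takes to actually flush the remaining dirty data to the disk
time sync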
 
One thing to watch out for is how filesystem caching happens on different OSes.

For example, a lot of Linux distros use write-back caching. This means a file is written to a cache first (whether it's in RAM or the buffer on the disk). Once that completes, the program is told that the write is complete. Later, in the background, the file is flushed from cache and actually written to the disk. Ext2 is notorious for this, which makes it appear "very fast" for writing files.

FreeBSD is more conservative and tends to use write-through caching. Files are written simultaneously to the cache and the physical disk. But the program is only told the write is complete once the file is on the disk. (Soft updates changes things a bit, with ordering and performance enhancements.)

It's a trade-off between "fast writes" and "data safety". Which is more important depends on the person and the data.

Write a large file to a slow disk on Linux. When you get the "copy complete" message, power off the system (as in, hold the power button in for 5 seconds). On boot up, see if you can find the file you just copied. :)

Try the same with different filesystems on Linux.

Then do the same on FreeBSD.

Then decide which you think is "better". :D
Thanks. Your answer satisfies me. I tried it on my MX Linux 17: the file (.mp4) was not corrupted, but it wasn't recorded at full length. It should have been 40 min; the file is now just 5 min, with some errors on opening, but still readable. I'd choose the FreeBSD way :)
 
No idea about what others are doing but here it's UFS all the way and has been for some years. Seems impossible to corrupt the file system (touch wood) and both FreeBSD boxen run fast with blink-of-the-eye copying of large files. I wouldn't be able to tell if something was faster, but I doubt anything is more stable. I've done some pretty stupid things over the years, but never lost files because of it.
 
Not to mention the fact that UFS2 filesystems can be mounted synchronously without noticeable (if any at all) loss of write performance. Try this with EXT3 (IIRC EXT4 filesystems can't be mounted synchronously (anymore?)); I experienced write speeds about an order of magnitude slower (from ~1MB/s to ~100KB/s) downloading files. :eek:
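If anyone wants to try it, mounting a UFS filesystem synchronously is just a mount option; a sketch with a placeholder device and mount point:

Code:
# remount an already-mounted filesystem synchronously
mount -u -o sync /usr/home
# or make it permanent with an /etc/fstab line like:
# /dev/ada0p4   /usr/home   ufs   rw,sync   2   2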
 
FreeBSD is more conservative and tends to use write-through caching.

Okay, that makes sense as to why Linux seems faster. FreeBSD is just holding off a bit to make sure everything is written to disk. That's actually fine with me; I'll trade a little speed for that.
 
One thing to watch out for is how filesystem caching happens on different OSes.

For example, a lot of Linux distros use write-back caching. This means a file is written to a cache first (whether it's in RAM or the buffer on the disk). Once that completes, the program is told that the write is complete. Later, in the background, the file is flushed from cache and actually written to the disk. Ext2 is notorious for this, which makes it appear "very fast" for writing files.

FreeBSD is more conservative and tends to use write-through caching. Files are written simultaneously to the cache and the physical disk. But the program is only told the write is complete once the file is on the disk. (Soft updates changes things a bit, with ordering and performance enhancements.)

It's a trade-off between "fast writes" and "data safety". Which is more important depends on the person and the data.

Write a large file to a slow disk on Linux. When you get the "copy complete" message, power off the system (as in, hold the power button in for 5 seconds). On boot up, see if you can find the file you just copied. :)

Try the same with different filesystems on Linux.

Then do the same on FreeBSD.

Then decide which you think is "better". :D

I just had this happen to me last night! I wrote a 10GB file to a flash drive and Linux said it was "done" (XFS to NTFS)... I went to reboot into Windows on my dual-boot machine, saw the reboot was hanging, thought "whatever", and rebooted anyway. Went to pull the 10GB of data off my flash drive... all dead. Today I wrote the same data to the same flash drive and found it took 1 min 15 sec to actually finish the write, judging by the activity light on the flash drive. Needless to say, that's yet another strike against Linux for me.
 
Ehm, no. That's not a fair comment because the same could easily have happened with any other OS, including FreeBSD.

Which part? Pulling my flash drive out while it's still writing? Yes, it wouldn't be a fair comment then. But when it comes to the OS notifying you that a write is complete, it's completely fair game for critique. My operation was to write data to the disk and tell me when it's done, not to cache it. With write-back caching it tells me when the write to cache is complete and continues writing to disk in the background. Although good in some cases, it's not that great for data integrity, since the data has been written to cache, not to permanent storage. This may be useful in some cases, like write-intensive jobs (which I believe is also why the ZIL is a thing), but not my use case. It's also not good from a desktop standpoint, since there will be many times you're waiting for a write to disk to complete so you can quickly pull out the storage media and use it elsewhere. At that rate, it's very deceptive, especially if there is no I/O indicator light on the media.

stratacast1, maybe you'll like the Linux sysctl variable dirty_bytes, which seems to control the writeback. Setting that low enough seems to make it practically write-through.
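For reference, a sketch of what that could look like on Linux; the numbers are only examples and need root to set:

Code:
# show the current writeback thresholds (0 means the *_ratio variants are in effect instead)
sysctl vm.dirty_background_bytes vm.dirty_bytes
# start background flushing at ~4 MB of dirty data, and make writers flush at ~16 MB
sysctl -w vm.dirty_background_bytes=4194304
sysctl -w vm.dirty_bytes=16777216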

I'll take a look at that!
 
But when it comes to the OS notifying you that a write is complete, it's completely fair game for critique.
No, it is not fair game for critique. For about 30 or 40 years, the definition of the Unix write(2) and close(2) system calls (for all POSIX-style operating systems) has been that when both complete, the file may very well still be in RAM cache. And that directly applies to programs such as cp that use write. After the program ends, there is no guarantee that the file is actually on disk. If you need that, you need to make the fsync(2) call, use the fsync program from the command line (for a file at a time), use the sync program from the command line, or unmount the file system.
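For example, from the command line the difference could look like this sketch; the file and mount point are placeholders, and the fsync utility here is FreeBSD's fsync(1) (on Linux you would typically just use sync):

Code:
# the copy can "complete" while the data is still sitting in RAM cache
cp big.iso /mnt/usb/big.iso
# force that one file out to the device before trusting it
fsync /mnt/usb/big.iso
# or flush everything and unmount before pulling the stick
sync
umount /mnt/usb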

Anyone who pulls out writable media (such as a USB stick) that is still mounted runs the risk that writes will not be completed, unless they have taken other measures as shown above. What's even worse: many file systems will not even begin writing until a certain amount of time has passed, so even if the USB stick has a "busy" indicator light, it might not blink.

Now, a different argument can be made: (a) file systems should not cache write data as aggressively, and should not allow very much dirty data to remain in RAM; (b) file systems should begin writing sooner, or perhaps even immediately, so that a busy indicator can be used to show that one should not pull the disk. Today, file systems already differ significantly, not only in how much or how aggressively they cache dirty (unwritten) data, but also in what techniques they use to make sure that dirty data gets to the disk in a sensible order, which is a compromise between performance, getting the data to disk sooner, and doing so in a fashion that minimizes the risk of corruption when an unclean shutdown happens. However, beware of the side effects of simply demanding less write caching: it would make file systems slower; for many important server workloads, significantly slower. If someone proposed such a change, it might make the OS unsuitable for high-performance or server use.

An interesting idea: If the file system knows that the storage device is of a kind that is often removable (like USB stick or SD card), or if it knows that this machine is used as a desktop and has a human nearby (who is likely to screw up a disk), then write to disk sooner, and performance be damned.
 
system calls (for all POSIX-style operating systems) has been that when both complete

I'm pretty sure this is what I'm arguing for, and if I haven't made that clear, my bad. Are you referring to write and close completing only to cache?

there is no guarantee that the file is actually on disk

Correct me if I'm wrong, but isn't exactly this contradicting what you just said about write(2) and close(2)? You just said "when both complete" (again, are you referring to write and close finishing on cache, or on cache and disk?), and then here you say there is no guarantee that the file is actually on disk.

I understand the differences in workloads and not compromising server loads... that's why there are generally "desktop" and "server" images. And maybe it doesn't matter whether the writing to disk changes or not; this could very well be a UI thing. Because if I'm understanding the POSIX standard, a file write operation is completed when data is written both to cache AND to disk (in this case, the actual final destination). I don't think I know of any desktop user in existence who wants the GUI saying "write to disk complete" when it clearly isn't. That's just backwards and confusing. I'm curious to write data to this external drive now using FreeBSD and see how it behaves, as this is the current behavior shown on Linux.
 
No, the definition of both write(2) and close(2) is: They can return complete if the file is in cache. There is no need for the file to be on disk. If a GUI program tells the user "write to disk complete" just because write(2) and/or close(2) have completed, that GUI program is flat out wrong. Which doesn't surprise me: some people who write GUIs have a tendency towards being clueless about how the underlying systems really work.

(Side remark: In reality, many programs don't call write() directly, but through user-space wrappers, the most common of which is fwrite(3), observe that it is in section 3 of the man pages. But everything that applies to write(2) also applies to the wrappers.)

Now, it's easy to overcome that: a program can call fsync(2) on the open file descriptor, or it can open the file in some sync or direct mode, or it can run the fsync(1) executable, or a handful of other things. However, one has to be very careful doing that: Uncritical use of synchronous writes can easily kill performance, even for desktop usage (there is a famous story of some web browser being considered slow on some platform, only because that browser by mistake was writing its local cache files synchronously). Furthermore, on flash-based systems (for example SSDs, which are now common in laptops), the uncritical use of synchronous writes can cause write amplification, which can cause premature wear out of the flash chips.

How bad the penalty for synchronous writes or sync operations is depends on how aggressively the OS caches, and it depends on the workload of the system. Which means that the intuition that a developer has based on running OS "X" on his machine may not be appropriate for other OSes and other platforms. There is much more to high-quality development than writing code that works once and then ship it.

The really correct answer is this: When deciding whether to write with sync or not, one has to look at the environment the program runs in. If the program has had a side effect that has been recorded by another entity, then if the file is only in cache and not on disk, and the cached copy doesn't make it to disk because of a system crash, then the other entity might later find an inconsistency between its record, and the file having reverted to the old content. As a concrete example: if the human remembers that the GUI program said "write to disk complete", and then the system crashes, then the human will later think that the program lied (because it did, the file was not on disk). This is a very simple and superficial example; more interesting versions happen for example in network protocols: a second computer might have received a message from the first computer, which implies that the first computer really made the file be durable; if the first computer now crashes, but the second computer has managed to harden its observation, then the two pieces of information contradict each other.
 
Thank you for all the thorough information. However, I'm still left with a bit of confusion here from hands-on testing. Here's my test:

- Plug in a USB flash drive formatted as FAT32, then use cp to copy a FreeBSD 11.1 ISO from the host to the USB flash drive
- Watch the activity light on the drive and watch for cp to complete

Results with Linux:
- cp starts with no drive activity (assuming the image is being written to cache). Then there's drive activity; soon after, cp returns, but there are still evidently I/O operations on the flash drive. Soon after, the I/O operations on the flash drive complete.
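As a sanity check on the Linux side, I suppose one could also watch the kernel's own dirty-page counters during the copy instead of trusting the activity light; a sketch, with the ISO name and mount point as placeholders:

Code:
# start the copy in the background
cp FreeBSD-11.1-RELEASE-amd64-disc1.iso /mnt/usb/ &
# watch how much data is still sitting in cache waiting to be written back
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'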

Results with FreeBSD:
- cp starts and there is drive activity immediately. Both the drive activity and cp finish at the same time.

So my thoughts here, then, are that Linux handles this differently than FreeBSD (getting back to phoenix's comment on write-back vs. write-through caching). To my understanding, Linux will do a write and close faster than FreeBSD because it does write-back caching, so once the data is written to cache the close happens, but the data still may not have reached the disk at that point. Then there's write-through, which FreeBSD uses, which writes to cache and through to the destination before closing. Right?
 
The delay you describe before physical writing begins exists because there is also some threshold before the cache gets flushed.
This makes sense: if you write output slowly and start writing it to disk too soon, you might cause more disk activity than if you wait until there is actually a good-sized payload to write. Think of writing 10 blocks with seeking in between versus 1 contiguous block.

The former could slow down overall performance more than the latter.
And more caching makes ordered flushing easier too, further increasing disk performance.

And even Windows has an option in its file manager "Safely remove..."
 