UFS: How do I access space freed by changing my UFS reserved space from 8% to 0%?

I've run 'tunefs -m 0' and 'tunefs -o space' on my UFS hard drive, but I haven't seen any reclaimed space. The changes were written successfully.
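For what it's worth, on FreeBSD you can confirm the new settings actually took with `tunefs -p` (tunefs wants the filesystem unmounted, or mounted read-only, when changing parameters). The device and mount point below are just examples, not your actual names:

```shell
# Print current tuning parameters -- minfree should now read 0%:
tunefs -p /dev/ada0p2            # example device; substitute yours

# Remount and compare what df reports before and after:
umount /media/storage            # example mount point
mount /dev/ada0p2 /media/storage
df -h /media/storage
```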

It's a 4TB drive, so I expect 3.725 TB, but I'm getting 3.52 TB.

Any thoughts?
 
Generally speaking, this is probably a really bad idea. Decades have been spent fine-tuning the default reserves and dynamic optimization for UFS. Changing them to squeeze out a bit more space on an already large drive is like tearing the roof off your garage to fit in a few more boxes of junk. Delete files or get a bigger drive. You're talking about roughly 200 gigabytes and a one-disk system, so it seems unlikely you're actually going to need that space for anything before you can buy a larger drive or a backup drive.
 

It's a 2x4TB hardware RAID-1 array I'm using to host my multimedia (no, not porn). 1 MB/s reads would be sufficient for the media I stream. I can't buy more hard drives due to space constraints, and I can't get larger hard drives due to cost constraints.

NTFS/ext etc. all seem to be able to give me my 3.7TB, so how come UFS needs 200GB reserved to twiddle its thumbs? 8% is a non-trivial fraction of space. I could understand a 10GB reserve, but 200GB seems insane.

Regardless, do you know how I can access the freed 200GB of space? gdisk shows 3.7TB, but the mounted partition shows 3.5TB.
 
Based on a 4,000,000,000,000 byte disk I get about 3.64 TiB of usable space (which most tools will label "TB"). I don't know if there's any other overhead in UFS, but you're only a few percent off. Personally I would not want to fill any drive within a few percent of completely full.
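The decimal-vs-binary gap accounts for most of the discrepancy. A quick sanity check, plain arithmetic with no UFS specifics (metadata overhead means the real numbers won't line up exactly):

```python
# Drive makers advertise decimal terabytes (10**12 bytes); most OS tools
# report binary tebibytes (2**40 bytes), often still labelled "TB".
ADVERTISED_BYTES = 4 * 10**12          # a "4TB" drive

tib = ADVERTISED_BYTES / 2**40
print(f"4 TB = {tib:.2f} TiB")         # ~3.64, already short of 3.725

# The 8% minfree reserve then comes out of what remains:
reserve = 0.08 * tib
print(f"8% reserve = {reserve:.2f} TiB, non-root usable = {tib - reserve:.2f} TiB")
```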

Of course, I'm not sure how "optimised" or "fine-tuned" the defaults are. At the 8% limit, users other than root cannot access the reserved space; as far as I'm aware, this is just to stop users from filling the hard drive, leaving some leeway so that services don't crash and logs still work until root can free some space. Reserving 8% twenty years ago was probably reasonable; reserving a few hundred GB just to make sure users can't fill the disk is a bit excessive.
 
I was able to fit more multimedia data on ZFS compared to ext4, which in turn was better than UFS2. Same disks, all default settings, except on ZFS I used compression=lz4.
 
I did consider mentioning ZFS + LZ4, but I wasn't sure how well multimedia data would compress. I'd much rather just use two basic disks in a ZFS mirror than UFS on a hardware mirror (especially if it isn't a 'proper' RAID controller).
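If you do go that route, the setup is short. Pool and device names here are made up, so substitute your own:

```shell
# Create a two-disk ZFS mirror (example devices -- use your actual ones):
zpool create media mirror /dev/ada1 /dev/ada2

# LZ4 is cheap enough to leave on even for mostly-compressed media files;
# blocks that don't compress are stored as-is:
zfs set compression=lz4 media
zfs list media
```

You also get checksumming and self-healing on the mirror, which a hardware RAID-1 won't give you.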
 