Solved: Lost/Missing Space on RAID 1 Array

I'm hoping somebody can answer a question for me. I bought two 2 TB hard drives that I've installed into my FreeBSD (v10.1) server. I've configured them as a software RAID 1 array and formatted it as UFS.

FreeBSD reports that the array only has 1.6 TB available for use. I realize that some space is going to be lost to system overhead (file system, RAID, etc.), but losing a fifth of the drive's capacity seems a bit much.

So, for anyone who can answer it: is this normal? I've listed some command output below. The array was created about 30 minutes ago, so it's still synchronizing.

Thank you to anyone who can answer this for me.

df -h:
Code:
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/mirror/gm0    1.8T    8.0K    1.6T     0%    /mnt/stuff

gmirror status:
Code:
      Name    Status  Components
mirror/gm0  DEGRADED  ada6p1 (ACTIVE)
                      ada8p1 (SYNCHRONIZING, 7%)
 
There are probably two factors:
  1. HDD vendors measure capacity in TB (powers of 10), while the OS reports it in TiB (powers of 2): 2 TB is 2 × 10^12 bytes, or about 1.82 TiB (2 × 10^12 / 2^40), which df rounds to 1.8T.
  2. The drop from 1.8T to 1.6T Avail is the UFS minfree reservation, space held back for root (8% by default). It can be tuned with tunefs(8); see the sketch below.
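
A minimal sketch of what that could look like, assuming the device and mount point from the df output above (tunefs generally wants the filesystem unmounted, or mounted read-only, before it will change anything):
Code:
# Show the current tuning parameters, including the minfree percentage.
tunefs -p /dev/mirror/gm0
# Unmount, lower the reserve to 2%, and remount.
umount /mnt/stuff
tunefs -m 2 /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt/stuff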
 
Okay, I think I solved my own problem. I didn't know that UFS reserves some space for itself (8% of the filesystem by default). After some reading on the newfs command, I recreated the filesystem with a 2% reservation instead. I now have most of the drive available.
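
For anyone who finds this later, a rough sketch of the commands (note that newfs recreates the filesystem and destroys anything on it, which was fine here only because the array was brand new):
Code:
# Rebuild the filesystem with a 2% reserve instead of the default 8%.
umount /mnt/stuff
newfs -m 2 /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt/stuff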

Since the array will mostly be serving video files with very little write activity, I'm going to leave the reservation at 2%.

Thank you wblock@ and mav@ for your help.
 
HDD vendors measure capacity in TB (powers of 10), while the OS measures in TiB (powers of 2). That is where 1.8T is from.

These days it is almost impossible to find an HDD with a properly reported size. It pisses me off so badly. How can you even be in that line of business and think in terms of base 10?
 