It looks like "df -h" shows the wrong full partition size

I asked this question in another thread (started by someone else), but as it was marked as "solved" I have to create a separate one.

I have a 500 GB SATA2 Samsung HD501LJ disk fully allocated for file storage, with a single slice and a single partition created via "sysinstall" (the system is installed on another disk).

My problem is that I can't understand why df -h shows 451 GiB total partition size whereas it should be 465 GiB (500,000,000,000 / 1024 / 1024 / 1024 ≈ 465.7).
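For reference, computing from the sector count that fdisk reports below (976773168 sectors x 512 bytes) gives about the same figure:

Code:
% echo "scale=2; 976773168 * 512 / 1024 / 1024 / 1024" | bc
465.76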

Here is what the tools show.

Fdisk
Code:
Disk name:      ad4                                    FDISK Partition Editor
DISK Geometry:  969021 cyls/16 heads/63 sectors = 976773168 sectors (476940MB)

Offset       Size(GB)        End     Name  PType       Desc  Subtype    Flags

         0          0         62        -     12     unused        0
        63        465  976773167    ad4s1      8    freebsd      165


Disklabel
Code:
Disk: ad4       Partition name: ad4s1   Free: 0 blocks (0MB)

Part      Mount          Size Newfs   Part      Mount          Size Newfs
----      -----          ---- -----   ----      -----          ---- -----
ad4s1d    <none>        465GB *


gpart show
Code:
...

=>       63  976773105  ad4  MBR  (465G)
         63  976773105    1  freebsd  [active]  (465G)

=>        0  976773105  ad4s1  BSD  (465G)
          0  976773105      4  freebsd-ufs  (465G)


df -h /mnt/data
Code:
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad4s1d    451G    4.0k    451G     0%    /mnt/data

So can somebody please explain to me why "df" shows 451 instead of 465?

P.S. I don't think this is the du vs. df conundrum, as I gathered that information from a freshly created FS.
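In case it matters, here is how the reserve settings can be checked (a sketch, assuming the filesystem shown above):

Code:
# print the current tuneable parameters, including the minfree reserve
tunefs -p /dev/ad4s1d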
 
df -H
Code:
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1a    1.0G    195M    758M    21%    /
devfs          1.0k    1.0k      0B   100%    /dev
/dev/ad0s1e    1.0G     18k    954M     0%    /tmp
/dev/ad0s1f     27G    2.7G     23G    10%    /usr
/dev/ad0s1d    5.7G     44M    5.2G     1%    /var
/dev/ad4s1d    484G    4.1k    445G     0%    /mnt/data

484 instead of 500.

BTW, df reports the right size if a FAT partition is used instead of UFS2.
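(For anyone who wants to reproduce the FAT test, something like this should work; -F 32 is just one choice of FAT variant, not necessarily what I used:)

Code:
# reformat the slice as FAT32 and mount it
newfs_msdos -F 32 /dev/ad4s1
mount_msdosfs /dev/ad4s1 /mnt/data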

gpart show
Code:
 ...
=>       63  976773105  ad4  MBR  (465G)
         63  976773105    1  !6  (465G)

=>        0  976773105  ad4s1  EBR  (465G)
          0  976773105         - free -  (465G)

=>        0  976773105  msdosfs/NO_NAME  EBR  (465G)
          0  976773105                   - free -  (465G)

df -h
Code:
Filesystem     Size    Used   Avail Capacity  Mounted on
...
/dev/ad4s1     465G     96k    465G     0%    /mnt/data

So what's wrong with UFS/df?
 
This definitely seems odd. I had an issue before with a 2 TB drive that reported something strange like 8 GB when I was playing around with partitions.

Perhaps you could try overwriting the MBR:
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 count=1[/CMD]

If you've played around with any GPT partitions you might need to wipe the start and the end of the disk:
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 count=34[/CMD]
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 skip=976773133[/CMD]

I'm not positive that "seek=976773133" is the correct number, but the idea is that the GPT layout uses the first and last 34 blocks of the disk (I think). I got 976773133 by taking 976773167 (the last sector in your fdisk output) and subtracting 34 from it.
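One way to double-check the sector count instead of trusting fdisk is diskinfo(8). Note that subtracting 34 from the total sector count 976773168 gives 976773134, one more than the number above, but either way dd just runs to the end of the disk:

Code:
# confirm the disk's real size in sectors
diskinfo -v /dev/ad4 | grep sectors
# first of the last 34 sectors, counted from the total sector count
echo $((976773168 - 34))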

Then make a new FS and see what happens.
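Something along these lines should rebuild the original layout (a sketch, assuming MBR plus a BSD label as before; double-check the device names first):

Code:
# recreate the MBR slice, the BSD label, and a fresh UFS2 filesystem
gpart create -s mbr ad4
gpart add -t freebsd ad4
gpart create -s bsd ad4s1
gpart add -t freebsd-ufs ad4s1
# note: gpart will likely name the new partition ad4s1a, not ad4s1d
newfs -U /dev/ad4s1a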
 
Filesystems and partitions are two different things. A partition is just space. A filesystem has overhead for keeping track of which blocks are allocated, which blocks belong to which files, directories, and all that. So it is not surprising that a filesystem inside a partition has less free space than a plain partition.
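For the curious, UFS2 shows that overhead directly: dumpfs(8) prints the filesystem's total size in fragments next to "blocks", which is the data-block figure df actually reports, along with the minfree reserve (a sketch, using the device from this thread):

Code:
# "size" = whole filesystem in fragments; "blocks" = data fragments df counts
dumpfs /dev/ad4s1d | egrep 'ncg|minfree'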
 
System reserved space?

Maybe, but not the 8% that UFS reserves for stability and performance, because 465 - 8% = 427.8.

I've checked the same thing with another 80 GB HDD and got an analogous result.

I did further tests with that 500 GB HDD under Linux by formatting it to ext3 and ext4, and here are the results.

ext3
Code:
Size: 458.5G
Used: 198.0M
Avail: 435.0G
Use: 0%

ext4
Code:
Size: 465.3G
Used: 7.0G
Avail: 435.0G
Use: 2%

Still "2+2=7".

Filesystems and partitions are two different things...

Yes, now I think you are right, wblock@.

There is no issue with FreeBSD/UFS, and the only way to find out the real usable partition size is to fill it with files.
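(Something like this, using the mount point from above; note that root can write into the minfree reserve, so the file may grow past what df lists as Avail:)

Code:
# fill the filesystem until "No space left on device", then check the totals
dd if=/dev/zero of=/mnt/data/fill bs=1m
df -h /mnt/data
rm /mnt/data/fill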

Thank you for your feedback, guys.
 
thorbsd said:
This definitely seems odd. I had an issue before with a 2 TB drive that reported something strange like 8 GB when I was playing around with partitions.

Perhaps you could try overwriting the MBR:
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 count=1[/CMD]

If you've played around with any GPT partitions you might need to wipe the start and the end of the disk:
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 count=34[/CMD]
[CMD=""]dd if=/dev/zero of=/dev/da0 bs=512 skip=976773133[/CMD]

I'm not positive that "seek=976773133" is the correct number, but the idea is that the GPT layout uses the first and last 34 blocks of the disk (I think). I got 976773133 by taking 976773167 (the last sector in your fdisk output) and subtracting 34 from it.

This is almost certainly not a partition problem. But for completeness:

gpart(8) can be used to remove both GPT and MBR partition tables. Back up first!
# gpart destroy -F da0

gpart(8) might refuse to do that if any of the partitions have mounted filesystems. Or not; back up first!
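(Checking for and unmounting anything on the disk first is cheap insurance; a sketch, substitute the actual device:)

Code:
# make sure nothing on the disk is still mounted before destroying the table
mount | grep da0
umount /mnt/data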
 