df -h giving weird results.

How is this possible?
Code:
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/ad11s1                 917G    889G    -19G   102%    /mnt/disk1
/dev/ad12s1                 458G     66G    369G    15%    /mnt/disk2
/dev/ad13s1                 917G    897G    -26G   103%    /mnt/disk3

102%? 103%? -19G? -26G?

What's going on here?
 
The filesystem reserves ~8% for root (df ignores it), which means that a disk showing 100% in use actually has another ~8% of free space (which can only be addressed by root). You (well, the root user) are now consuming part of that additional ~8%.
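df(1) computes the Capacity column as Used / (Used + Avail), so once the reserve is being eaten into, Avail goes negative and the percentage climbs past 100%. A quick back-of-the-envelope check with the numbers you posted:
Code:
# Rough sketch (not the actual df source): rebuild the Capacity column from
# the Used and Avail figures above. A negative Avail pushes it past 100%.
echo "scale=1; 100 * 889 / (889 - 19)" | bc   # /mnt/disk1 -> ~102%
echo "scale=1; 100 * 897 / (897 - 26)" | bc   # /mnt/disk3 -> ~103%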
 
DutchDaemon said:
The filesystem reserves ~8% for root (df ignores it), which means that a disk showing 100% in use actually has another ~8% of free space (which can only be addressed by root). You (well, the root user) are now consuming part of that additional ~8%.

OK, I'm just making sure nothing is wrong.

Here's why: my old media server was Linux 2.6.16 with an mdadm RAID5 array and an XFS filesystem.

I upgraded to FreeBSD + ZFS, but to migrate the data I just copied it all from the mdadm RAID5 onto single disks and then built the ZFS system with 2 raidz vdevs (4 1 TB disks each).
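Roughly the commands I mean, for anyone curious (the pool name and device nodes here are just placeholders, not my real ones):
Code:
# Hypothetical sketch of the layout described above: one pool made of two
# raidz vdevs with four 1 TB disks each. Names and devices are placeholders.
zpool create tank \
    raidz ada0 ada1 ada2 ada3 \
    raidz ada4 ada5 ada6 ada7
zpool status tank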

When I mounted those disks in FreeBSD and ran df -h, I saw that they were showing 102% and 103%, so I was worried that perhaps the data didn't copy right.
 
The ~8% number is actually for UFS. Some filesystems use slightly more, some may be using slightly less. I don't know what the ZFS margins are exactly, but I guess it's in the 8-10% ballpark too. You should try to stay below 100%, though, because any !root user won't be able to write to these partitions now.
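If you want to see the exact figure on a UFS filesystem, tunefs(8) will print it (the device name below is only an example):
Code:
# Print the UFS tuning parameters, including "minimum percentage of free
# space" -- the root-reserved portion. Device name is an example only.
tunefs -p /dev/ada0s1a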
 
DutchDaemon said:
The ~8% number is actually for UFS. Some filesystems use slightly more, some may be using slightly less. I don't know what the ZFS margins are exactly, but I guess it's in the 8-10% ballpark too. You should try to stay below 100%, though, because any !root user won't be able to write to these partitions now.

Hold on, I think there is a misunderstanding.

The new filesystem is ZFS, and that isn't close to full.

I didn't have enough money to build an entirely NEW server and transfer the data over to the new ZFS box, so what I did was back up the mdadm-based server onto single ext2-based drives, 3 of them at 1 TB each.

I used most of the hardware in that machine to build the new server in a new case that holds 20 hot-swap drives. Right now I have 8 of them in 2 ZFS raidz vdevs for a total of 6 TB, and I've attached the 3 ext2fs-based drives to the new server to copy the data onto the ZFS filesystem.
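For the record, hooking one of those drives up and copying it over looks more or less like this (the destination path is a placeholder, not necessarily what I'm actually using):
Code:
# Hypothetical sketch: mount one ext2 transfer disk read-only and copy its
# contents onto the pool. /tank/media is a placeholder destination.
kldload ext2fs                        # ext2fs kernel module, if not loaded yet
mkdir -p /mnt/disk1
mount -r -t ext2fs /dev/ad11s1 /mnt/disk1
cp -Rpv /mnt/disk1/ /tank/media/      # trailing slash copies the contents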

It's 2 of the ext2fs-based drives that are showing up as over 100% full.

When I'm done copying the data over, I plan to take those drives and make another raidz vdev with 4 drives (to match the other 2 I have).
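Adding that third vdev later should just be one command (again, placeholder device names):
Code:
# Hypothetical: grow the pool with a third raidz vdev built from the
# freed-up transfer drives plus one more disk. Names are placeholders.
zpool add tank raidz da0 da1 da2 da3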
 
Oh, right. Never mind then. I think ext2fs has ~10% 'root overhead', btw.
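If you want the exact number rather than my guess, the reserve is recorded in the ext2 superblock; tune2fs from the sysutils/e2fsprogs port will show it (using one of the devices from your df output):
Code:
# Needs the e2fsprogs port. Prints total and reserved block counts for one
# of the ext2 transfer disks shown in the df output above.
tune2fs -l /dev/ad11s1 | grep -i 'block count'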
 
Cool deal. Well, everything just finished copying... let's just hope it worked. Thanks for the answers.
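If I want more than hope, I can always compare checksums between one of the source disks and its copy on the pool, something along these lines (the destination path is a placeholder, and this assumes each source disk went into its own directory):
Code:
# Rough sketch: per-file MD5 comparison of one ext2 source disk against its
# copy on the pool. /tank/media/disk1 is a placeholder destination.
( cd /mnt/disk1 && find . -type f -exec md5 -r {} + | sort ) > /tmp/src.md5
( cd /tank/media/disk1 && find . -type f -exec md5 -r {} + | sort ) > /tmp/dst.md5
diff -u /tmp/src.md5 /tmp/dst.md5 && echo "copy matches"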
 