Inconsistency in root partition size

OS: FreeBSD 6.0

During the GEOM setup, 512 MB was allocated for /.
Running df -h shows 107% capacity for my / slice:

Code:
Filesystem    Size    Used   Avail Capacity
/dev/ad0s1a   496M    489M    -33M     107%

In 1K-blocks:
/dev/ad0s1a 507638  500586  -33566     107%

Note "Used" < Full "Size" => -ve "Avail" ??

Yet when I summed the individual directories and files from the du -h output, they totaled only 150.3 MB.

What could be going on here?
 
It is probably an open file handle. If you have sysutils/lsof installed you can track it down and restart whatever is writing to the nonexistent file, which should free up the space. If not, a reboot (and perhaps an fsck in single-user mode) should clear it up (NB: this won't stop it from happening again).
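A minimal sketch of how lsof can be used for that, assuming the sysutils/lsof port is installed (-a ANDs the selections, +L1 limits the output to files whose link count is below one, i.e. deleted but still held open, and the trailing / restricts it to the root filesystem):
Code:
# list processes still holding deleted files open on /
lsof -a +L1 /
# the COMMAND and PID columns show what to restart (or kill) to release the space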

Are /var and /tmp their own filesystems?
 
/tmp has its own file system; /var is a symlink to /usr/var.
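A quick way to double-check that layout, for anyone following along (nothing here is specific to this box):
Code:
mount               # shows which filesystems are actually mounted
ls -ld /var /tmp    # /var should show up as a symlink pointing at /usr/var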

I was able to run lsof; it produced a large list, but I cannot find any interpretive documentation for guidance. Are all of them open, or only some? How do I differentiate? Should I kill all of them? If so, there are several with:
Code:
PID => 1 ... etc; 

COMMAND => 
sysinstall
usbd
sh
lsof 

NAME => 
/
/dev/devicename
/stand/filename
/dist/filename
Thanks!

BTW: Running from a Fixit disk
 
Finally got the system to boot 'normally'; lsof produces an even longer output file, and there is a continuous stream of messages to the console.
Are all the files listed in the lsof output open, and how should they be treated? The list is too long to post here: 2082 entries.
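Not an answer, just a sketch that may make a 2082-line listing easier to digest: write it somewhere off the full filesystem (e.g. /tmp, which is its own filesystem here) and summarise it per command:
Code:
lsof / > /tmp/lsof-root.txt
# count open files on / per command name, busiest first
awk 'NR > 1 {print $1}' /tmp/lsof-root.txt | sort | uniq -c | sort -rn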

BTW, cron seems to be running, with newsyslog set to rotate log files; recent entries in /var/log:
Code:
-rw-------  1 root  wheel     1158 Sep  5 03:41 auth.log
-rw-------  1 root  wheel     2021 Sep  5 03:01 cron
-rw-------  1 root  wheel       61 Sep  5 01:17 debug.log
-rw-------  1 root  wheel     7807 Sep  5 03:14 dmesg.today
-rw-r--r--  1 root  wheel    28056 Sep  5 03:41 lastlog
-rw-r--r--  1 root  wheel     2046 Sep  5 02:00 log.nmbd
-rw-r--r--  1 root  wheel     2007 Sep  5 02:00 log.smbd
-rw-r--r--  1 root  wheel       61 Sep  5 01:17 lpd-errs
-rw-r-----  1 root  wheel     1994 Sep  5 03:00 maillog
-rw-r--r--  1 root  wheel     2025 Sep  5 03:00 messages
-rw-------  1 root  wheel      205 Sep  5 03:14 mount.today
-rw-------  1 root  wheel        0 Sep  5 03:14 pf.today
-rw-r-----  1 root  network     61 Sep  5 01:17 ppp.log
-rw-------  1 root  wheel       61 Sep  5 01:17 security
-rw-r-----  1 root  wheel      728 Sep  5 10:52 sendmail.st
-rw-r-----  1 root  wheel      728 Sep  5 02:59 sendmail.st.0
-rw-------  1 root  wheel        0 Sep  5 03:14 setuid.yesterday
-rw-r-----  1 root  network     61 Sep  5 01:17 slip.log
-rw-r--r--  1 root  wheel      308 Sep  5 03:41 wtmp
-rw-------  1 root  wheel       61 Sep  5 01:17 xferlog


/: write failed, filesystem is full
 
"It is probably an open file handle", apparently not, several rebooting event should have closed them.
fstat -f => showed nothing


Ran fsck -y repeatedly; in one episode it was run 10 times in a row. Removed some files and reduced usage on / so that the "filesystem full" message no longer occurs. But the discrepancy between the df and du figures still persists.
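For what it's worth, a way to put numbers on that discrepancy (du's -x flag stays on one filesystem, so mounted filesystems such as /tmp are not counted, and -k matches df's 1K blocks):
Code:
df -k /       # what the filesystem itself thinks is allocated
du -skx /     # what is reachable through the directory tree
# a large gap between the two usually means space held by unlinked-but-open files
# or by files hidden underneath a mount point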

Could there be any other possible reason to explain this?
 
Solved

Received this advice from Mel Flynn-2 on Nabble:
author="Mel Flynn-2"

This is exactly what I figured. Some files are hiding behind a mount point. They most likely got there because you ran make installworld without /usr mounted, which can happen if you keep the FreeBSD source tree in a different location, reboot into single-user mode, mount only the source tree, and run installworld.

To repair, reboot into single-user mode and run the following commands:
Code:
fsck -y /
mount -u -o rw /
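# /usr itself is not mounted at this point, so this only removes the stray
# copies hiding on the root filesystem underneath the /usr mount point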
rm -rf /usr/*
exit

This should delete the offending files.
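Before running that rm -rf, it may be worth confirming the diagnosis. A minimal sketch, assuming /usr is its own partition and has not yet been mounted in single-user mode:
Code:
fsck -y /
mount -u -o rw /      # make / writable, still without mounting /usr
du -sh /usr           # anything counted here lives on the root filesystem,
ls -la /usr           # normally hidden once /usr is mounted over it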
 