'write failed, filesystem is full' but df says 1.7T remains

Hi Guys,

I found that my server had rebooted yesterday and I have not yet figured out why. Unfortunately my RAID array is now not letting me write to it, even though the RAID manager says there are no errors and df shows 1.7T free :(

I have rebooted it with no change.

When I run fsck_ffs I get this result:

Code:
# fsck_ffs /dev/da2a
** /dev/da2a (NO WRITE)

CANNOT READ BLK: 5850780960
CONTINUE? [yn] y

THE FOLLOWING DISK SECTORS COULD NOT BE READ: 5850780960, 5850780961, 5850780962, 5850780963,
/dev/da2a: INCOMPLETE LABEL: type 4.2BSD fsize 0, frag 0, cpg 0, size 1556086784

I can still read and mount it, so I could back it up and rebuild, but I would rather not if I can help it. Do I have any options?
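If it does come to rebuilding, I assume the copy itself would look something like this (with /mnt/backup as a placeholder for wherever the data goes, and assuming dump doesn't choke on the unreadable sectors):

Code:
# level-0 dump of the live filesystem (-L takes a snapshot first),
# piped into restore on a freshly newfs'd filesystem at /mnt/backup
dump -0Laf - /media | (cd /mnt/backup && restore -rf -)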

Also, I should say that my signature relates to a different build; this machine is running FreeBSD 7.1. This is the output of df -h:

Code:
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    496M    273M    184M    60%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1e    496M    1.0M    455M     0%    /tmp
/dev/da0s1f    9.5G    2.9G    5.8G    34%    /usr
/dev/da0s1d    4.8G    1.6G    2.8G    37%    /var
/dev/da2a      2.6T    719G    1.7T    29%    /media

When I run dmesg all I get is this:

Code:
g_vfs_done():da2a[READ(offset=2993866080256, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2994058723328, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2994251366400, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2994444009472, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2994636652544, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2994829295616, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2995021938688, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2995214581760, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2995407224832, length=16384)]error = 5
g_vfs_done():da2a[READ(offset=2995599867904, length=16384)]error = 5
pid 1134 (cp), uid 0 inumber 14 on /media: filesystem full

NB: the first error line repeats hundreds of times.
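In case it helps, I figure I can probe one of those failing offsets directly against the raw device (the block number below is just the byte offset from dmesg divided by the 16384-byte read length); if the sectors really are bad, dd should fail with an I/O error:

Code:
# read the first failing 16 KiB block reported by dmesg:
# 2993866080256 / 16384 = 182731084
dd if=/dev/da2a of=/dev/null bs=16384 skip=182731084 count=1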
 
You have read errors. To fix that, I would first unmount the disk and fsck it (that way fsck runs with WRITE enabled). Afterwards, I would check the disk with smartmontools (you can find it in the ports).
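Roughly like this, using the device and mount point from your df output (and note that letting fsck write to a disk with pending read errors carries some risk, so copy off anything important first if you can):

Code:
umount /media
fsck_ffs /dev/da2a            # now runs with writes enabled
cd /usr/ports/sysutils/smartmontools
make install clean
smartctl -a /dev/da2a         # may not see past a hardware RAID controller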

You said you have a RAID but you didn't mention which kind (RAID 0, 1, etc.) or how it was set up (ZFS, gmirror, etc.).

It seems to me that one of your disks is failing. That would be OK for a mirrored RAID, but not so OK for a striped RAID (unless it's RAID 5, RAID 6, or some raidz setup).
 
Thanks da1,

Sorry, it is a hardware RAID 50 on a HighPoint RocketRAID 2320. The RAID manager (using SMART) says the status is normal on all disks, physical and virtual.

I'll check out the tools you mentioned and report back shortly.

Thanks for your time :)
 
Hello again,

fsck gives me the same result as fsck_ffs (which makes sense, since fsck just calls fsck_ffs for UFS filesystems).

I don't have time to look at smartmontools today, but when I get home I'll do some research.

Thanks again.
 
smartmontools does not support HighPoint RocketRAID cards, but there are native FreeBSD tools and drivers for the card that I have been using. The management tools support SMART, which reports that every attribute is OK, even though one disk has 552 bad sectors. Is that particularly bad?

If I don't have any better options by Thursday I might as well rebuild it, but I really would like to know what caused this, both out of curiosity and for future reference.
 