Empty UFS drive Shows 577GB Used in Win7

Hello all,

I've got 4 x WD2002FAEX 2 TB drives set up in RAID0 using a Rocket 640L RAID controller.
I've run newfs(8) on the array and enabled soft updates. du -h shows something like 400K used (the .snap directory), but when I mount the share on my Windows 7 box via Samba, it shows 577 GB used. Can anyone explain this to me? Am I missing something?

I'm running FreeBSD 8.2-RELEASE amd64
Samba version 3.5.6
 
Could it be a problem with the protocol? Try changing the max protocol option to LANMAN2, even though NT1 should be the better choice...
 
Can you show the full df -h output, and what Windows thinks the used/available space is?

Also, I'm a bit concerned about a RAID0 array of 4 x 2TB disks. What specific reason did you have for creating such a risky array? I assume the data is either temporary and can be re-created, fully backed up somehow, or you just don't care if you lose it next week.
 
usdmatt said:
Can you show the full df -h output, and what Windows thinks the used/available space is?

Also, I'm a bit concerned about a RAID0 array of 4 x 2TB disks. What specific reason did you have for creating such a risky array? I assume the data is either temporary and can be re-created, fully backed up somehow, or you just don't care if you lose it next week.

I can't show output from df -h as I've already started putting all of my data back on the drive. That answers another of your queries: everything's backed up to the hilt on several external 3 TB drives. I don't have a huge amount of data, but I do enjoy the extra space, and although I know RAID0 can be risky, I'm taking precautions.

du -h was showing everything correctly, with only 400K used. I've attached a GIF showing the empty mounted share as it appeared in Windows.


The whole reason why I'm going through this now is I just lost a drive and had to replace it. As good as the Caviar Blacks are, they're not infallible.
 

Attachments

  • Empty Drive.gif (5 KB)
Steve_Laurie said:
The whole reason why I'm going through this now is I just lost a drive and had to replace it. As good as the Caviar Blacks are, they're not infallible.

Exactly my point, really. These modern big drives drop like flies, and surely it would be better to have slightly less space and some redundancy than to go through restoring everything every time a disk fails. With redundancy, a failed drive would just mean a five-minute shutdown to put a new disk in. Judging by the services you mention, I can't see performance being a concern (in fact, if this box is on 24x7, you may be better off with the cheaper WD Red NAS disks instead of Blacks).

The only reason I asked to see the full df output is that Samba doesn't share an actual filesystem the way NFS does, but individual folders, and I don't entirely trust the way it works out the free space. It should simply report the space of the filesystem the shared folder is on, but I'm fairly sure I've seen seemingly random figures before as well. I was wondering whether the figures Windows sees could be matched to other disk usage on the server (or some other clue, such as a specific ratio between the reported use and the overall size).
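If the figures Windows sees really are suspect, one way to make them auditable is smb.conf's dfree command option, which lets a script of your own supply the numbers. Samba expects the script to print the total and free space in 1K blocks for the path it is handed. A minimal sketch (the function name and the queried path are just for illustration):

```shell
#!/bin/sh
# Hypothetical helper for smb.conf's "dfree command" option.
# Samba expects "<total-1K-blocks> <free-1K-blocks>" on stdout
# for the path it passes in.
dfree() {
    # Second line of df -k: field 2 is total 1K blocks, field 4 is available.
    df -k "${1:-.}" | awk 'NR == 2 { print $2, $4 }'
}

dfree /
```

Pointing the share at such a script (e.g. dfree command = /usr/local/bin/dfree, path being an assumption) at least lets you log and compare what Samba is actually reporting against df on the server.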
 
Steve_Laurie said:
Thanks. I know. I will. Just waiting for 9.1-RELEASE to arrive. It's just such a huge job re-setting up the Samba, DNS, NFS, DHCP, etc. servers (I don't like upgrading).

I'm not sure why you would have to do that... just upgrade the FreeBSD base (and rebuild all your ports, because the 8.2 -> 9.1 jump causes shared library version bumps). My upgrade path is basically: # freebsd-update upgrade -r ?.?-RELEASE, merge the configuration files, # freebsd-update install, update the ZFS boot code in case it changed, reboot, # freebsd-update install again, rebuild all ports, and then a final # freebsd-update install in case it instructs me to.
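Written out in order, that path looks something like the following sketch (the release number is left as a placeholder, as in the post above; these commands only make sense run as root on the machine being upgraded):

```shell
# Sketch of the freebsd-update major-version upgrade path described above.
# "?.?-RELEASE" is a placeholder -- substitute the actual target release.
freebsd-update upgrade -r ?.?-RELEASE  # fetch the new release; merge configs when prompted
freebsd-update install                 # first pass: installs the new kernel
# (update the ZFS boot code here if it changed between releases)
shutdown -r now                        # reboot into the new kernel
freebsd-update install                 # second pass: installs the new userland
# rebuild all ports (shared library version bumps), e.g. with portmaster -af
freebsd-update install                 # final pass, if freebsd-update instructs you to
```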
 
You could also update to 8.3 instead. No rebuilding of ports would be needed, and it's going to be supported for a while.
 
Steve_Laurie said:
The whole reason why I'm going through this now is I just lost a drive and had to replace it. As good as the Caviar Blacks are, they're not infallible.

The 2TB Black drives don't have a very good reputation for reliability.
 
wblock@ said:
The 2TB Black drives don't have a very good reputation for reliability.

They're also not a supported drive for running in RAID, due to lack of TLER support.

I had a pair of 512 GB Blacks running in RAID1 before I discovered this, and the lack of TLER caused frequent rebuilds. Thinking one or both drives had failed, I replaced them with another pair of (then-new) 1 TB Blacks; same issue. I eventually found out about the lack of RAID support on the internet. Not happy.

Running 4 of them in RAID0?

Good luck!

I wouldn't use a hardware RAID controller; I'd set them up using ZFS instead (which doesn't have a problem with the lack of TLER support). In fact, this is what I've done...
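For what it's worth, here is a sketch of both layouts with zpool(8), assuming the four disks show up as ada0 through ada3 (the pool name and device names are placeholders, and these commands destroy any data on the disks):

```shell
# RAID0 equivalent: stripe across all four disks, no redundancy.
zpool create tank ada0 ada1 ada2 ada3

# Or raidz1: roughly three disks of usable space, survives one disk failure.
#   zpool create tank raidz1 ada0 ada1 ada2 ada3

# Create a filesystem on the pool and point the Samba share at it.
zfs create tank/share
```

Given the thread's failure history, the one-disk capacity cost of raidz1 seems cheap compared with restoring 8 TB from external drives after every failure.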
 