Dedicating a 320GB hda to data, what should I know?

I've got FreeBSD on a 120GB hda, and I've got another 320GB hda that I want to start storing all of my data (movies, music, uni stuff etc.) on.

I know I could simply format it UFS and mount it via fstab at boot, but I want to make sure I'm doing things as safely and reliably as possible, as I'd like this data to be reasonably well protected.

So is there anything out of the ordinary I should do for this sort of setup? What about specific options (dump, pass etc.) in /etc/fstab for mounting? UFS or FAT32? Which would be more likely to survive a cold shutdown (because it seems like UFS gets corrupted after ANY cold shutdown...)?

Thanks to anyone who can give advice to my vague question, Cheers.
 
no special options in fstab.
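A plain entry does the job; something like this, as a sketch (assuming the new drive shows up as ad1 with a single UFS partition; check dmesg for your actual device name):

/dev/ad1s1d    /data    ufs    rw    2    2

The last two fields are the dump level and fsck pass number; 2 2 is the usual pair for a non-root filesystem.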

UFS or FAT32? Are you nuts... UFS, of course... :)
Or you can try out ZFS (it's experimental, but stable) on FreeBSD 8-BETA3... (or on some FreeBSD 7.2-STABLE, if I'm correct).
I use ZFS and I have never lost a file, and have never had to do any filesystem maintenance.
 
caesius said:
UFS or FAT32? Which would be more likely to survive a cold shutdown (because it seems like UFS gets corrupted after ANY cold shutdown...)?

FAT32 can be easily accessed from any other system, but it's old and unreliable; you don't want to use it unless you need to access your data from a Windows installation on the same computer.

UFS2 + soft updates should be bulletproof and should never corrupt after a cold shutdown. You will perhaps lose the data written in the last 1-2 minutes, but disc consistency should never be broken (and its fsck runs in the background, which is interesting for a 320 GB disc). Also, it's widely tested and actively developed/bugfixed.
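For an existing filesystem you can toggle soft updates with tunefs(8) while it's unmounted; the device name here is only an example:

# tunefs -n enable /dev/ad1s1d

For a new filesystem, newfs -U enables them from the start.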

killasmurf86 said:
Or you can try out ZFS (it's experimental, but stable) on FreeBSD 8-BETA3... (or on some FreeBSD 7.2-STABLE, if I'm correct).
I use ZFS and I have never lost a file, and have never had to do any filesystem maintenance.

I agree, ZFS is a giant leap forward in the filesystem world, but please consider that it's been around for just 2-3 years so far, and bad things can still be discovered. Also, you would get the most out of ZFS (in terms of safety and speed) by striping the zpool (which is more or less the equivalent of a filesystem in your case) across multiple discs, enabling mirror or RAID configurations.
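For example, a two-disc mirror would be created roughly like this (disc names are hypothetical, substitute your own):

# zpool create tank mirror ad1 ad2

A plain single-disc pool (# zpool create tank ad1) still gives you checksumming, but no redundancy to heal from.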
 
Well, for ZFS I should warn that you need a lot of RAM...
It's memory hungry... search the forum and Google to find out more about ZFS, if you want.
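As a rough illustration only (the right values depend on your RAM, and matter especially on i386), people typically tune the kernel memory limits in /boot/loader.conf:

vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="256M"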
 
xzhayon said:
I agree, ZFS is a giant leap forward in the filesystem world, but please consider that it's been around for just 2-3 years so far, and bad things can still be discovered. Also, you would get the most out of ZFS (in terms of safety and speed) by striping the zpool (which is more or less the equivalent of a filesystem in your case) across multiple discs, enabling mirror or RAID configurations.

I second this. ZFS has great features, but it really is quite young compared to UFS. Sometimes it takes years of testing and debugging before something is considered stable enough. Perhaps 20 years is a bit too long, but waiting 5 years or so might be better.

UFS may be old, but it still works nicely (unlike FAT32). If your data is really important, you might want to use RAID 1 or some other level (definitely not RAID 0 alone though). ZFS also has RAIDZ, which is comparable to traditional RAID.
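For completeness, a single-parity RAIDZ pool over three discs would look something like this (disc names are just an example):

# zpool create tank raidz ad1 ad2 ad3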
 
caesius said:
it seems like UFS gets corrupted after ANY cold shutdown...
Wrong. A day or so after I had installed FreeBSD on this machine, I had to cold-boot it because of an "automount" application screw-up that did some funny stuff with the CD drive. A few days later, Xorg was freezing the machine because of hardware (VGA) problems (it was so bad I even had trouble powering the machine on again).
In all these cases, the automatic background fsck checked the file system and solved any problems caused by the dirty unmounts. Nothing was corrupted/damaged/lost.
 
Forgot to mention something. If you're going to use ZFS, it's best to use it on a whole disc instead of partitions or slices. The performance difference is significant.
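In other words, point zpool at the raw device rather than at a slice; device names below are examples:

# zpool create tank ad1       (whole disc)
# zpool create tank ad1s1     (slice; reportedly noticeably slower)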
 
dennylin93 said:
Forgot to mention something. If you're going to use ZFS, it's best to use it on a whole disc instead of partitions or slices. The performance difference is significant.

Hmmm, I boot from flash because I use geli (entire-disk encryption)... I wonder if there would be a difference if I used ZFS without geli.
 
Thanks for the replies

xzhayon said:
UFS2 + soft updates should be bulletproof and should never corrupt after a cold shutdown. You will perhaps lose the data written in the last 1-2 minutes, but disc consistency should never be broken (and its fsck runs in the background, which is interesting for a 320 GB disc). Also, it's widely tested and actively developed/bugfixed.

UFS2? So is this different from the type 165 I formatted the hda with? How do I use UFS2?
 
The default version of UFS on FreeBSD switched to UFS2 around the time of FreeBSD 6.0. Any new install since then uses UFS2 automatically.

See the newfs(8) man page for details on the defaults, and how to change them when formatting partitions.
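For example, to be explicit about both the format version and soft updates when creating the filesystem (the device name is only illustrative):

# newfs -O 2 -U /dev/ad1s1d

-O 2 selects UFS2, which is already the default on anything recent, and -U turns on soft updates.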
 
caesius said:
Thanks for the replies

UFS2? So is this different from the type 165 I formatted the hda with? How do I use UFS2?

165 is the FreeBSD slice (Linux/Windows users read: partition) identification (/dev/da0s1, etc.); the partitions within (I don't know if the Linux/Windows world has come up with a good single term for what we call a partition) are what actually contain the filesystem(s) (/dev/da0s1a, da0s1b, etc.), which you create as UFS1 or UFS2 via newfs(8).

Further, for a secondary data drive containing one filesystem which will never be used by another operating system, there isn't really any need to fdisk(8) it; just run # bsdlabel -w /dev/da1 && newfs -U /dev/da1a (you can use # bsdlabel -e /dev/da1 to put swap on there if you wish). The full sequence is sketched below.
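Putting it all together, the whole procedure for your case would go roughly like this (da1 is only an example; triple-check you have the right disc before writing a label):

# bsdlabel -w /dev/da1
# newfs -U /dev/da1a
# mkdir /data
# mount /dev/da1a /data

plus the matching line in /etc/fstab so it comes back after a reboot.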

By the way, ZFS is pretty well tested in production, so don't let the naysayers get you down about it being a bit new.
 