ZFS is great for features, but for real-world production use on an I/O-heavy server I think it's bad news.
Plus, if you do ZFS on root (which I initially thought was the right way to go), you can't add devices to the pool later, and you can't use a separate device for the log.
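For comparison, on a non-root data pool these operations are straightforward. A sketch, assuming a pool named `tank` and spare partitions `ada2p1`/`ada3p1` (names are illustrative):

```shell
# Grow a non-root pool by adding another vdev (here a mirror of two disks)
zpool add tank mirror /dev/ada2p1 /dev/ada3p1

# Attach a separate ZIL (log) device to offload synchronous writes
zpool add tank log /dev/ada4p1

# Confirm the new vdev layout
zpool status tank
```

It's the bootable root pool that historically couldn't take these layouts on FreeBSD, which is what made ZFS-on-root feel like a trap.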
Then there's the huge, ever-growing metadata cache, which can be saturated simply by listing large directories with 100k or so files inside. The auto-tuning of 'vfs.zfs.arc_meta_limit' settles on a very low value, but even when I tuned it up to several gigabytes it was saturated again within 24 hours.
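For anyone wanting to check this on their own box, roughly what I was doing (the 4 GiB figure is just an example, not a recommendation):

```shell
# Compare current ARC metadata usage against the limit (values in bytes)
sysctl vfs.zfs.arc_meta_used vfs.zfs.arc_meta_limit

# Raise the metadata limit to 4 GiB for the running system
sysctl vfs.zfs.arc_meta_limit=4294967296

# To make it persist across reboots, set it in /boot/loader.conf:
#   vfs.zfs.arc_meta_limit="4294967296"
```

Watching `arc_meta_used` climb back to the limit is how I could tell the bump only bought a day or so.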
On the upside, though: even through server crashes, including ZFS-related crashes (and there were many), I have yet to find any corrupted data. I've never needed anything like fsck either.
However, I remember trying FreeBSD 9's default UFS+SUJ setup and ending up with corrupted database files. A stable filesystem, even a really slow one, is always superior to a filesystem that leaves you with inconsistent data, so in my view ZFS is still a better choice than the FreeBSD 9 default.