Correct partitioning for 3 TB backup disks (FreeBSD 9.1)

Hello,

We have a few backup servers with the same configuration. Each server contains one disk for the OS and multiple disks for backups. The servers store millions of files ranging from 5 MB to a couple of gigabytes in size.

Here's the [CMD=""]gpart list[/CMD] output for one of the drives:
Code:
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 3000034656256 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: ***
   rawtype: ***
   label: (null)
   length: 3000034656256
   offset: 20480
   type: freebsd-ufs
   index: 1
   end: 5859442727
   start: 40
Consumers:
1. Name: ada1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

So the partition should be 2.7 TiB on these 3 TB disks.

But when I look at the output of [CMD=""]df -h[/CMD] for that partition:
Code:
Size    Used   Avail Capacity
2.7T     32M    2.4T     0%

It has always been like this on all my FreeBSD servers, but I never looked into it before. The partition size is 2.7 TiB and only 32 MiB is used, yet the available capacity is just 2.4 TiB. Why the difference?

About 8% of the storage appears to be lost. I'm probably just missing something out of ignorance. What is really going on here?

If capacity really is being lost, how can I recover it? How should I partition my disks for my scenario to get both good performance and full storage capacity?

Please share your suggestions or point me to a document that explains this.

Thanks.
 
1000-byte kilobytes vs. 1024-byte kilobytes. You were sold 3 TB disks, but that capacity was calculated with 1000-byte kilobytes. The FreeBSD tools use the more accurate 1024-byte kilobytes as the basis of their calculations.
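Roughly, the media size from your [CMD=""]gpart list[/CMD] output comes out as about 3 TB in powers of 1000 and about 2.7 TiB in powers of 1024. A quick sanity check with bc(1):
Code:
# vendor "terabytes": powers of 1000
% echo "scale=2; 3000592982016 / 1000^4" | bc
3.00
# binary terabytes (TiB): powers of 1024, which is what gpart and df report
% echo "scale=2; 3000592982016 / 1024^4" | bc
2.72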
 
I think the media size of 3000592982016 bytes equals 2.7 TiB. I'm not talking about the 3 TB figure.

A 2.7 TiB partition shows only 2.4 TiB available even though nothing is used.
 
wblock@ said:
8% of space in a UFS filesystem is reserved due to MINFREE. See newfs(8).

Thanks for the correct answer :) I've now read tunefs(8) and learned that MINFREE is reserved for performance reasons.
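For reference, from what I understand of tunefs(8), the reserve can be inspected and, if someone really wants to, lowered, roughly like this (using the ada1p1 partition from my gpart output; the manual says the filesystem must not be mounted read-write while it is changed):
Code:
# show the current settings, including the minimum percentage of free space
tunefs -p /dev/ada1p1

# lower the reserve from the default 8% to e.g. 5%
# (the filesystem must be unmounted, or mounted read-only, while this is changed)
tunefs -m 5 /dev/ada1p1
Though tunefs(8) warns that lowering the threshold can hurt performance, which is exactly what I want to avoid.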

I don't know anything about ZFS. Does it work the same way as UFS? Is there a way to use the full capacity without a performance decrease? I'm prepared to keep the systems as they are, but I'd be happy if there's a solution that doesn't cost performance.
 
ZFS performance tails off dramatically as a pool gets close to full.

ZFS Best Practices recommend not using over 80% of a pool.
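If you do go the ZFS route, the CAP column of [CMD=""]zpool list[/CMD] is an easy way to keep an eye on that 80% limit (the pool name "backup" below is just a placeholder):
Code:
# overall view of every pool; CAP is the percentage used
% zpool list

# or just the capacity figure for a single pool, handy in a monitoring script
% zpool list -H -o capacity backup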

Any filesystem's write performance is likely to drop as it gets full, as files need to be chopped up into fragments and spread over the disk to fit in the holes left here and there.
 