EDIT: My concerns over drive speed were misplaced; the drive was already performing at its maximum. My confusion came from the impressive speed gains seen with an 11-disk zpool built from the same model of drive. @wblock@ provides nice guidance for formatting.
I would like to verify that I am partitioning a basic UFS file system correctly. I have configured a solitary drive as:
Code:
gpart destroy -F da10
gpart create -s GPT da10
gpart add -t freebsd-ufs da10
newfs -S 4096 -b 32768 -f 4096 -O 2 -U -m 8 -o space -L ufs4kb /dev/da10p1
I was underwhelmed by the resulting IO performance, though:
bonnie++ v1.96, UFS space optimized, 80G, N=3: Read=130 Write=120±3 Rewrite=43±1 (MB/sec); Latency: 610, 1200, 8900 (ms)
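For reference, the kind of invocation I mean is roughly the following; the mount point, the -n 0, and the three passes implied by N=3 are assumptions on my part, not a record of the exact command:
Code:
# Assumed invocation: 80 GiB of IO, small-file tests disabled, three passes (N=3).
# /mnt/ufs4kb and -u root are placeholders, not the exact command that was run.
bonnie++ -d /mnt/ufs4kb -s 80g -n 0 -u root -x 3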
The drive is a 3 TB SATA 3.0 drive connected to an HBA alongside 11 other drives that form a RAID-Z3 pool. That pool gets significantly (3-4 fold) better performance (more details):
bonnie++ v1.96, RAID-Z3 11 drives, 100G, N=6: Read=670±77 Write=330 Rewrite=230±14 (MB/sec); Latency: 260, 780, 2100 (ms)
I expected the drive to perform similarly to its comrades in the pool, perhaps better. The drives are all "fake 512 byte sector" WD drives, and I have correctly configured the pool (ashift=12). Have I failed to do so for the UFS device? Or missed a configuration step? Or does a large ZFS pool make such good use of parallel IO that it's just that much more efficient than an isolated drive?
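If alignment is the missing step, my understanding is that it has to be forced when the partition is created; a minimal sketch of what I think a 4 KiB-aligned layout looks like (the -a 4k flag and the newfs block/fragment sizes are assumptions on my part, not a confirmed fix for the speed):
Code:
# Assumption: align the partition start and size to the 4 KiB stripesize the drive reports.
gpart destroy -F da10
gpart create -s GPT da10
gpart add -t freebsd-ufs -a 4k da10
# 32 KiB blocks / 4 KiB fragments so UFS never writes a partial physical sector.
newfs -U -b 32768 -f 4096 /dev/da10p1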
I tried to follow guidance using bsdlabel, but learned that it does not support drives over 2 TB, and was pointed back to gpart. I tried just using newfs:
Code:
gpart destroy -F da10
dd if=/dev/zero of=/dev/da10 bs=4096 count=1
newfs -U -f 4096 /dev/da10
... but ended up with a provider I could not mount (mount: /dev/da10s1: Invalid argument).
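If the filesystem really was created on the raw device, I assume the mountable provider is /dev/da10 itself rather than a slice; a sketch of what I would expect to work (the /mnt/test mount point is just an example):
Code:
# Assumption: newfs ran on the bare device, so there is no da10s1 slice to mount.
mkdir -p /mnt/test
mount /dev/da10 /mnt/test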
Drive characteristics:
Code:
diskinfo -v da10
da10
512 # sectorsize
3000592982016 # mediasize in bytes (2.7T)
5860533168 # mediasize in sectors
4096 # stripesize
0 # stripeoffset
364801 # Cylinders according to firmware.
255 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WCC1T0860025 # Disk ident.
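For completeness, I assume alignment can be double-checked from the partition's start offset and stripeoffset; something like:
Code:
# A partition start that is a multiple of 8 sectors (8 x 512 = 4096 bytes) is 4 KiB aligned.
gpart show da10
# stripeoffset should come back 0 for the partition if it is aligned.
diskinfo -v da10p1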