Hi again,
Further to my previous posts, I am seeing ZFS performance degrade over time.
If I leave the system running for two or more weeks and then run the following benchmarks, I get:
[andy@MAINSERVER ~]$ sudo bonnie++ -d /storage/temp/dir -u 0:0 -s 20g
...
Version 1.96 ------Sequential...
A further update. I scrub this array weekly, and a cron job mails me the status seven hours after the scrub starts (at 11am). This is last week's mail:
pool: storage
state: ONLINE
scan: scrub in progress since Sun Apr 29 04:00:02 2012
2.74T scanned out of 3.27T at 114M/s, 1h21m to go
0...
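For anyone wanting the same setup, a hypothetical pair of crontab entries matching the schedule described above (pool name from this thread; the mail recipient is a placeholder):

```shell
# Start the scrub Sunday 04:00; mail the pool status Sunday 11:00,
# seven hours later, whether or not the scrub has finished.
0 4  * * 0  root  /sbin/zpool scrub storage
0 11 * * 0  root  /sbin/zpool status storage | mail -s "weekly scrub: storage" root
```

Note that the 11am mail reports progress, not completion; as the output above shows, a scrub can still be running when the mail is sent.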
Just as an update to this, I tried running three simultaneous dd jobs, each reading from a local disk and writing to the array (all 7 disks are spread across 2x Adaptec 1430SA controllers), and got this:
dd if=/dev/ada3 of=/storage/testing bs=1024000 count=10000 &
dd if=/dev/ada0 of=/storage/testing1 bs=1024000...
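The same pattern, sketched with safe placeholder files rather than raw devices: each dd runs in the background with `&`, and `wait` blocks the shell until all of them have finished, so the jobs genuinely overlap.

```shell
# Three concurrent dd jobs (placeholder input/output paths, small sizes).
# bs=1048576 is 1 MiB, matching the bs=1024000 style used above in spirit.
dd if=/dev/zero of=/tmp/t0 bs=1048576 count=4 2>/dev/null &
dd if=/dev/zero of=/tmp/t1 bs=1048576 count=4 2>/dev/null &
dd if=/dev/zero of=/tmp/t2 bs=1048576 count=4 2>/dev/null &
wait    # block until all background dd jobs exit
ls -l /tmp/t0 /tmp/t1 /tmp/t2
```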
Of course... that makes sense.
Ahh rats... I wanted to keep the labels, but you are right, you lose them. My drives now show up as adaXp1.
Is there any way to get the system to see them as labels without having to rebuild the array (nearly finished transferring back the files, but if I...
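One possible answer, assuming the partitions are GPT: gpart can set a label on an existing partition in place, without touching the data, after which the pool can be re-imported by label. A hedged sketch (label names and partition index are placeholders):

```shell
# Label partition 1 of each disk in place; no data is modified.
gpart modify -i 1 -l disk0 ada0
# Re-import the pool using the labeled device nodes under /dev/gpt/.
zpool export storage
zpool import -d /dev/gpt storage
```

After the import, `zpool status` should list the vdevs as gpt/disk0 and so on instead of adaXp1.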
PS. The dd test from /dev/random showed no improvement in performance, but the more realistic benchmark (bonnie++) seemed to show significant improvements.
Post #6 says it all really - from bare disks to an array with 4096-byte-aligned blocks. AFAICT, the commands do the following:
gpart create writes a partition scheme (GPT) to each disk, so partitions can be created and managed through gpart; partitions given a label appear as devices in /dev/gpt/...
gpart add adds a partition starting at...
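A minimal sketch of those two steps for one disk (device name and label are placeholders; `-a 4k` is the flag that forces the partition start onto a 4096-byte boundary):

```shell
gpart create -s gpt ada0                       # write a GPT partition scheme to the disk
gpart add -t freebsd-zfs -a 4k -l disk0 ada0   # add a 4K-aligned partition, labeled disk0
# The labeled partition then appears as /dev/gpt/disk0.
```

Repeating this for each disk and building the pool from the /dev/gpt/ labels gives both the alignment and stable device names.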
Yes, they are, so I took the plunge.
bonnie++ benchmarks from before the rework (I have 10G of memory in the server):
# bonnie++ -d /storage/dir -u 0:0 -s 20g
...
Version 1.96 -------Sequential Output------- --Sequential Input-- --Random-
Concurrency 1 -Per Chr- --Block--...
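The rule of thumb behind the `-s 20g` above: the test file should be at least twice physical RAM so the ARC cannot cache the whole working set (10G of RAM here, hence 20g). A hypothetical helper to compute it:

```shell
ram_gb=10                                               # physical RAM in GiB (assumption)
cmd="bonnie++ -d /storage/dir -u 0:0 -s $(( ram_gb * 2 ))g"
echo "$cmd"
```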
Thanks. Is there any way to get the drives to appear as 4096 byte block devices without destroying the ZFS pool? I think I have enough spare space to move stuff around but....
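As far as I know the short answer is no: a vdev's ashift is fixed when the pool is created, so presenting 4096-byte devices only helps when (re)creating the pool. On FreeBSD the usual trick at creation time is gnop, sketched here with placeholder labels:

```shell
# gnop overlays a device that reports a 4096-byte sector size.
gnop create -S 4096 /dev/gpt/disk0          # creates /dev/gpt/disk0.nop
zpool create storage raidz gpt/disk0.nop gpt/disk1.nop gpt/disk2.nop gpt/disk3.nop
# Once created with ashift=12, the pool can be reattached to the real devices:
zpool export storage
gnop destroy /dev/gpt/disk0.nop
zpool import storage
```

This matches the "move the data off, rebuild, move it back" approach discussed in the thread.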
Hi all,
Hopefully you can help. I have four Samsung 2TB drives in a RAIDZ array. They are given to ZFS as complete disks, meaning their stripe offset is 0.
diskinfo -v /dev/ada3
/dev/ada3
512 # sectorsize
2000398934016 # mediasize in bytes (1.8T)...