Epikurean said:
I tried the suggested patch, but unfortunately it killed my ZFS pool:

Code:
  pool: tank
 state: UNAVAIL
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        UNAVAIL      0     0     0  insufficient replicas
          raidz1    UNAVAIL      0     0     0  corrupted data
            ad6     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
Believe me, I would have avoided the 4K drives if I knew then what I know now, but most of us in this thread have already invested in 4K drives and are stuck with them. As such, we are trying to find workarounds to make these 4K drives behave adequately. I see that you repeatedly slam these 4K drives but otherwise offer no real help.

wonslung said:
It won't work because of the variable block size ZFS uses for RAIDZ. Until firmware updates come out, avoid those drives for raidz.
palmboy5 said:
Believe me, I would have avoided the 4K drives if I knew then what I know now, but most of us in this thread have already invested in 4K drives and are stuck with them.

palmboy5 said:
To whom, and at what monetary loss? It would have to sell for less than the purchase price AND cost more to ship. Not practical.
vermaden said:
It seems that this little patch can 'fix' issues with 4K WD Green drives:
http://lists.freebsd.org/pipermail/freebsd-fs/2010-October/009706.html
Code:
/usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c

-*ashift = highbit(MAX(pp->sectorsize, SPA_MINBLOCKSIZE)) - 1;
+*ashift = highbit(MAX(MAX(4096, pp->sectorsize), SPA_MINBLOCKSIZE)) - 1;
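For clarity on what the patched line computes: highbit() returns the 1-based index of the highest set bit, so highbit(x) - 1 is log2(x) for powers of two. Forcing the sector size to at least 4096 turns the resulting ashift from 9 into 12. A small sh sketch of the same arithmetic (illustrative only, not the kernel code; `ashift_for` is my own helper name):

```shell
#!/bin/sh
# Sketch of the *patched* ashift computation from vdev_geom.c:
#   *ashift = highbit(MAX(MAX(4096, sectorsize), SPA_MINBLOCKSIZE)) - 1;
# For powers of two, highbit(x) - 1 equals log2(x).

ashift_for() {
    n=$1                          # sector size the disk reports
    [ "$n" -lt 4096 ] && n=4096   # the patch: force at least 4096
    [ "$n" -lt 512 ] && n=512     # SPA_MINBLOCKSIZE floor (512)
    a=0
    while [ "$n" -gt 1 ]; do      # compute log2(n)
        n=$((n / 2))
        a=$((a + 1))
    done
    echo "$a"
}

ashift_for 512    # a 4K drive lying about its sector size -> prints 12
ashift_for 4096   # a drive reporting 4K natively -> also 12
```

Without the 4096 floor, an EARS drive that reports 512-byte sectors would yield ashift=9, which is exactly the misalignment this thread is fighting.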
phoenix said:
Is this patch along the same lines as this one:
http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html
They both deal with ashift, setting the minimum block size for the pool to 4 KB, but they do it in two very different places in the code.
(I've posted a reply to that message to find out.)
$ dd if=/dev/random of=./testfile bs=1m count=500
After following palmboy5's suggestion, here is what I did:
1. Created a new ZFS Pool
2. Ran gnop create -S 4096 on each drive at the same time
3. copied some data on the pool
4. compiled and installed a new kernel with the patch
5. reboot
As expected, the *.nop drives are "lost" after a reboot, BUT the ZFS Pool is in perfect shape!
The only thing that bugs me is that there was no message whatsoever indicating the use of the .nop devices in the ZFS pool: no degraded state, no indication in "zpool status" that the *.nop devices were in use (except when I tried to replace my adX drive with the adX.nop device, which didn't work: ZFS told me the .nop device was already in use).
raab said:
Does anyone have before/after performance stats from applying this patch?
Having just bought six WD20EARS drives and then come across this issue, I want to know if it's worth applying the patch or just selling them and getting non-4K drives.
Code:
type    recordsize / (disks - parity)    per-disk stripe    status
raidz1  128KiB / 2                       64KiB              good
raidz1  128KiB / 3                       43KiB              BAD
raidz2  128KiB / 2                       64KiB              good
raidz1  128KiB / 4                       32KiB              good
raidz2  128KiB / 4                       32KiB              good
raidz1  128KiB / 8                       16KiB              good
raidz2  128KiB / 8                       16KiB              good
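The good/BAD column follows from simple arithmetic: a full 128 KiB record is striped across the data disks, and the per-disk chunk only lands on 4 KiB physical-sector boundaries when the number of data disks is a power of two. A small sh sketch to reproduce the table (`check_layout` is my own helper, not from the thread):

```shell
#!/bin/sh
# Per-disk stripe size for a 128 KiB ZFS record across N data disks,
# and whether it aligns to 4 KiB (the good/BAD column above).

check_layout() {
    data_disks=$1
    chunk_bytes=$(( 128 * 1024 / data_disks ))
    # round to the nearest KiB for display (matches the 43KiB figure)
    chunk_kib=$(( (chunk_bytes + 512) / 1024 ))
    if [ $(( chunk_bytes % 4096 )) -eq 0 ]; then
        status=good
    else
        status=BAD
    fi
    echo "$data_disks data disks: ${chunk_kib}KiB per disk -> $status"
}

for n in 2 3 4 8; do
    check_layout "$n"
done
```

With 3 data disks the 128 KiB record splits into ~43 KiB chunks, which are not 4 KiB-aligned, so every stripe triggers read-modify-write on the emulated drives.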
gnop create -S 4096 adaX
zdb <poolname>
Does this mean that one would only need to run gnop create -S 4096 on one of the drives in order to force ZFS to use 4K on all of them?
files-backup# mdconfig -a -t malloc -s 100M -S 512
md2
files-backup# mdconfig -a -t malloc -s 100M -S 4096
md3
files-backup#
files-backup#
files-backup# zpool create test mirror md2 md3 # <- 512b disk is specified first
files-backup# zdb |grep 'ashift'
ashift=12
files-backup# zpool destroy test
files-backup#
files-backup# zpool create test md2
files-backup# zdb | grep 'ashift'
ashift=9
files-backup# zpool attach test md2 md3
cannot attach md3 to md2: devices have different sector alignment
files-backup#
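For anyone reading the zdb output above: ashift is a power-of-two exponent, so ashift=9 means 2^9 = 512-byte minimum blocks and ashift=12 means 2^12 = 4096-byte ones. A one-liner to translate it (the echo simulates the zdb output line; on a live system you would pipe zdb | grep ashift instead):

```shell
# Convert zdb's ashift exponent to a minimum block size in bytes.
echo "ashift=12" | awk -F= '{ print "minimum block size:", 2 ^ $2, "bytes" }'
# prints: minimum block size: 4096 bytes
```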
gsect -S 512 /dev/my4Ksectdrive
/dev/my4Ksectdrive (4K sectorsize)
/dev/my4Ksectdrive.sect (512B sectorsize)