jyavenard said: Ok, here I'm using an SSD (Intel X25-M drive)
So I would assume that write speed on those is much greater than on the WD 2TB Green RE3
jyavenard said: Well, here I have a 40GB SSD drive; my plan was to use 8GB for the ZIL partition and 32GB for the cache..
Well, that's my point: I can either create two slices, or one slice with two partitions.
Now, if Solaris isn't going to be able to read my two slices, I have a problem. Considering the nightmare I went through yesterday, when FreeBSD failed miserably after I removed the log device and the only thing that saved me was booting OpenIndiana and re-importing the pool there, I surely want my ZFS setup to work with Solaris/OI *just in case*.
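For reference, a minimal sketch of the two-slice option, assuming an MBR layout on the 40GB SSD (ada1) and the pool name "pool" that appears later in this thread; the sizes follow the 8GB/32GB split above:

    # gpart create -s MBR ada1            (MBR scheme, so the SSD gets slices ada1s1, ada1s2)
    # gpart add -t freebsd -s 8G ada1     (ada1s1: 8GB slice for the separate log / ZIL)
    # gpart add -t freebsd ada1           (ada1s2: remaining ~32GB for the L2ARC cache)
    # zpool add pool log /dev/ada1s1
    # zpool add pool cache /dev/ada1s2

This is only the two-MBR-slice variant; the alternative mentioned above (one slice carved into two BSD-label partitions) would use a gpart BSD scheme inside the slice instead.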
jyavenard said: I did, but I can't say I saw much difference; timings varied so much with v14 (from 30s to 55s) that it's hard to say.
1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
3) ( mount -w / ) to make sure you can remove and also write a new zpool.cache as needed.
4) Remove /boot/zfs/zpool.cache
5) kldload both zfs and opensolaris, i.e. ( kldload zfs ) should do the trick
6) Verify that vfs.zfs.recover=1 is set, then ( zpool import pool )
7) Give it a little bit of time; monitor activity using Ctrl+T.
zpool export pool
zpool import pool
(Put together, this is sketched at the console below.)
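A rough console sketch of those steps, assuming root is on UFS and that loader.conf does not load zfs at boot ("OK" is the loader prompt, "#" the single-user shell, and vfs.zfs.recover is assumed to be readable via sysctl):

    OK set vfs.zfs.recover=1
    OK boot -s
    # mount -w /
    # rm /boot/zfs/zpool.cache
    # kldload zfs                  (opensolaris.ko is pulled in as a dependency)
    # sysctl vfs.zfs.recover       (should report 1)
    # zpool import pool            (Ctrl+T shows progress while it runs)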
danbi said: PS: The v28 code is highly experimental. Perhaps some of the console logs will help developers figure out what happened to your pool.
Hi
On 27 December 2010 16:04, jhell <jhell@dataix.net> wrote:
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also write a new zpool.cache as needed.
> 4) Remove /boot/zfs/zpool.cache
> 5) kldload both zfs and opensolaris, i.e. ( kldload zfs ) should do the trick
> 6) Verify that vfs.zfs.recover=1 is set, then ( zpool import pool )
> 7) Give it a little bit of time; monitor activity using Ctrl+T.
Ok..
I got into the same situation again; no idea why this time.
I've followed your instructions, and sure enough I could do an import of my pool again.
However, I wanted to find out what was going on...
So I did:
zpool export pool
followed by zpool import
And guess what... zpool hung again. I can't Ctrl-C it; I have to reboot.
So here we go again.
Rebooted as above.
zpool import pool -> ok
This time, I decided that maybe what was screwing things up was the cache.
zpool remove pool ada1s2 -> ok
zpool status:
# zpool status
  pool: pool
 state: ONLINE
 scan: scrub repaired 0 in 18h20m with 0 errors on Tue Dec 28 10:28:05 2010
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
        logs
          ada1s1    ONLINE       0     0     0

errors: No known data errors
# zpool export pool -> ok
# zpool import pool -> ok
# zpool add pool cache /dev/ada1s2 -> ok
# zpool status
  pool: pool
 state: ONLINE
 scan: scrub repaired 0 in 18h20m with 0 errors on Tue Dec 28 10:28:05 2010
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
        logs
          ada1s1    ONLINE       0     0     0
        cache
          ada1s2    ONLINE       0     0     0

errors: No known data errors
# zpool export pool -> ok
# zpool import
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.11r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.94r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 16.57r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 16.95r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 32.19r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 32.72r 0.00u 0.03s 0% 2556k
load: 0.00 cmd: zpool 405 [spa_namespace_lock] 40.13r 0.00u 0.03s 0% 2556k
Ah ah!
It's not the separate log that makes zpool crash, it's the cache!
Having the cache device in prevents the pool from being imported again...
Rebooting: same deal... I can't access the pool any longer!
Hopefully this is enough of a hint for someone to track down the bug...
[jeanyves_avenard@ /]$ zpool status
load: 0.00 cmd: zpool 411 [spa_namespace_lock] 3.06r 0.00u 0.00s 0% 2068k
load: 0.00 cmd: zpool 411 [spa_namespace_lock] 3.91r 0.00u 0.00s 0% 2068k
load: 0.00 cmd: zpool 411 [spa_namespace_lock] 4.29r 0.00u 0.00s 0% 2068k
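For now the only workaround I can see from all of the above is to keep the cache device out of the pool whenever it is exported: recover with vfs.zfs.recover as described earlier, then drop the L2ARC slice before exporting (a sketch, assuming the same pool and device names as above):

    # zpool remove pool ada1s2     (detach the cache/L2ARC slice first)
    # zpool export pool
    # zpool import pool            (with only the log device attached, this completes)

If a developer wants more detail on the hang itself, the kernel stack of the stuck zpool process should be obtainable from another terminal with procstat -kk <pid> (pid 405 and 411 in the output above) while it sits on spa_namespace_lock, assuming procstat is usable on this system at that point.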