Actually, I get the same "tx->tx" lockup on the real servers as well, and it lasts a long while, but there the system at least recovers from it. I haven't yet hit the case where the pool is impossible to import again.
However, when split-brain occurs, hastctl reports 1.8 TB of "dirty" instead...
Narrowing down my issue:
In my virtual Xen environment I get some kind of deadlock, with state "tx->tx" and 99.8% idle, if I follow these steps. All zfs commands stop working as well, and I'm unable to import the pool again even after resetting the guest machine:
dd if=/dev/urandom of=./foo bs=100M...
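For reference, the write step and a way to spot the hang look roughly like this. This is a sketch under assumptions: the pool name `tank` and mount point `/tank` are mine, not from the original post, and the `dd` count is arbitrary since the original command is truncated.

```shell
# Write a large file into the pool (pool mount point /tank is an assumption):
cd /tank
dd if=/dev/urandom of=./foo bs=100M count=20

# Shortly afterwards, zfs/zpool commands hang. The "tx->tx" state is the
# wait channel of the stuck ZFS transaction-group threads, visible with:
ps -axH -o pid,wchan,comm | grep 'tx->'
```

If threads stay parked on that wait channel while the system is ~99.8% idle, the txg sync pipeline is deadlocked rather than busy.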
In my recent tests I've simply done an ifconfig down, or a hard reset, while a client is copying files to the server over NFS.
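The failure injection itself amounts to something like the following. The interface name `em0` and the Xen domain name `hast-primary` are assumptions for illustration; any interface carrying the replication/NFS traffic, or a hard reset from the Xen host, has the same effect.

```shell
# Option 1: cut the network out from under the running NFS copy
# (interface name em0 is an assumption):
ifconfig em0 down

# Option 2: hard-reset the guest from the Xen dom0
# (domain name hast-primary is an assumption):
xl destroy hast-primary
```

Either way, the writes in flight are lost mid-transaction, which is exactly the crash scenario ZFS on a single trustworthy provider is supposed to survive.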
More than once I've ended up with metadata corruption and errors; trying zpool import -F then tells me to restore the pool from a backup and refuses to import it.
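The recovery attempts after the reset look roughly like this (the pool name `tank` is an assumption; the flags are standard `zpool import` options):

```shell
# Plain import fails, reporting metadata corruption:
zpool import tank

# -F rewinds to the last consistent transaction group,
# discarding the most recent writes:
zpool import -F tank

# -Fn is the dry-run variant: report whether -F could
# succeed without actually importing:
zpool import -Fn tank

# Last resort: a read-only import to salvage data:
zpool import -o readonly=on -F tank
```

When even `-F` refuses and tells you to restore from backup, the damage is in pool metadata older than the txgs the rewind can reach.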
Seems a bit sketchy to...
hast or not hast
I've been using a version of this guide to set up my own replication testing in two Xen guests. With disk1 and disk2 set up as HAST resources, I've created a pool with mirrored devices. This works most of the time, and all of the time if everything is shut down cleanly.
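The setup described above can be sketched as follows. The resource names disk1/disk2 match the post; the pool name `tank` is an assumption, and your hast.conf must already define both resources on both nodes.

```shell
# On both nodes: start the HAST daemon.
service hastd onestart

# On the node that should be primary:
hastctl role primary disk1
hastctl role primary disk2

# Create a ZFS pool mirroring the two HAST providers
# (pool name "tank" is an assumption):
zpool create tank mirror /dev/hast/disk1 /dev/hast/disk2

# On the other node, the same resources run as secondary:
hastctl role secondary disk1
hastctl role secondary disk2
```

Note the mirror is between two HAST providers on the *same* host; HAST itself replicates each provider to the peer, so a failover means switching roles and importing the pool on the other node.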
But for testing...