zfs raidz locks up immediately

Some guidance on determining what is happening and what can be done would be much appreciated. The situation:

Initially, I created a raidz (I believe it was raidz) on 4 x 1.5TB drives from a separate FreeBSD 8.0R OS drive (I believe this was the version used).

Installed FreeBSD 8.1R to a new OS drive, removed the old OS drive, imported the zpools and started a zfs scrub. I had to shut down the host (not sure if a zpool scrub -s was run before shutdown). Booting the host back up, it locks up once zpool/zfs is called. It never returns a shell once any zpool/zfs command is run, and Ctrl+C doesn't break out of it.
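(For reference, this is the command that stops an in-progress scrub; <poolname> is a placeholder.)

Code:
zpool scrub -s <poolname>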

I also tried the above steps with FreeBSD 8.2R.

I would be happy to run any commands that might provide useful data.
 
I also noticed these issues on 4 different servers (Supermicro with Adaptec/Intel onboard SATA, 6 disks in raidz2) - some of them had run stably for a long time on FreeBSD 8.0. After upgrading to FreeBSD 8.2-RELEASE and importing the pools under the new zpool version, a server would disappear at a random time about once a day - without any messages in the log or on the console, it would just freeze. After each recovery I run zpool scrub - it completes without problems.
 
I had similar issues before. I got a tip from an Oracle tech to try Solaris 11 Express, which is basically a newer OpenSolaris - without the "Open" part =)

I booted it up live, imported the pool, was able to scrub it clean, and then went back to BSD.

Hope it works!

/Sebulon
 
I got a similar bug a year ago:
Make a ZFS raidz1 with 4+ disks.
Shut down the system. Replace one disk with a new one and remove another, so that two disks are removed from the pool and it is broken. Boot and run `zpool status` - it freezes. I should add that the same happens with other commands (import/export/destroy), so the pool actually becomes uncontrollable.
kern/142563
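A rough repro sketch (device names like ada0 are placeholders, and the disk swap is done physically while powered off):

Code:
zpool create tank raidz1 ada0 ada1 ada2 ada3
shutdown -p now
# swap one disk for a new one and pull another, then boot
zpool status    # hangs; import/export/destroy hang the same way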

arkive, you can try to fix this by renaming (or moving elsewhere) the ZFS cache file in /etc/zfs/. Then reboot. But after this you will have to re-import the pool, and you may even lose data.
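A minimal sketch of that workaround, assuming the cache file is named zpool.cache (on some FreeBSD releases it lives in /boot/zfs/ instead):

Code:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
shutdown -r now
# after the reboot:
zpool import <poolname>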
 
Sebulon said:
I had similar issues before. I got a tip from an Oracle tech to try Solaris 11 Express, which is basically a newer OpenSolaris - without the "Open" part =)

I booted it up live, imported the pool, was able to scrub it clean, and then went back to BSD.

Hope it works!

/Sebulon

Do you think it's worth trying OpenIndiana (live CD) for this?
 
Alt said:
I got a similar bug a year ago: ... kern/142563

arkive, you can try to fix this by renaming (or moving elsewhere) the ZFS cache file in /etc/zfs/. Then reboot. But after this you will have to re-import the pool, and you may even lose data.

After you mv'd the cache file out of /etc/zfs/, were you able to run a scrub and bring the array back online?
 
What I've done so far seems to be working (the scrub is still going) with FreeBSD 8.2-RELEASE:

Code:
zpool export <poolname>
zpool import <poolname>
zpool upgrade <poolname>
zpool scrub <poolname>
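If you want to check pool versions before committing to the upgrade, zpool can report them first (exact output varies by release):

Code:
zpool upgrade        # lists pools not running the latest version
zpool upgrade -v     # lists the ZFS versions this system supports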

A couple of things to note: when I was trying to run the import, there were messages about vdev.no_replicas along the lines of:
Code:
zfs vdev failure, zpool=<poolname> type=vdev.no_replicas

Running zpool status showed 2 of the 4 drives as UNAVAILABLE, and atacontrol list showed only 2 of the drives. I powered down the machine, rearranged some of the power cables, then booted back up. All drives showed up in atacontrol list and zpool status, so I resumed with the import.
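(In other words, a quick sanity check after re-seating the cables, with <poolname> as a placeholder:)

Code:
atacontrol list            # all four drives should be listed
zpool status <poolname>    # every vdev should show ONLINE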
 
pelmen said:
I also noticed these issues on 4 different servers (Supermicro with Adaptec/Intel onboard SATA, 6 disks in raidz2) - some of them had run stably for a long time on FreeBSD 8.0. After upgrading to FreeBSD 8.2-RELEASE and importing the pools under the new zpool version, a server would disappear at a random time about once a day - without any messages in the log or on the console, it would just freeze. After each recovery I run zpool scrub - it completes without problems.

We had similar (possibly the same) problems; it was a bug and it is fixed now.

http://forums.freebsd.org/showthread.php?t=21147
 