I'm currently running -CURRENT from June 13; it appears to include the new ufs_dirhash code (posted to the freebsd-current mailing list in May) for vm_lowmem event handling. For now I've had to disable UFS_DIRHASH in my kernel.
Some background: I built a test system to check ZFS stability in -CURRENT, since ZFS is known to be unstable in 7.2 (under heavy load, processes end up stuck in the zio state with kmem running very low), while I really want to use ZFS in production and can't afford panics, even once a month.
So, two days ago I started bonnie++ on the zpool (the system had been up for well over two days) while also running PostgreSQL and MySQL servers jailed on ZFS (with some load from my websites), and got a panic, though not one related to ZFS in any obvious way:
"reboot after panic: dirhash: NULL hash on list"
My guess is that the new dirhash patch's vm_lowmem event handler was triggered by ZFS putting heavy pressure on kmem; even though no processes were stuck in the zio state and everything seemed to be going well, dirhash panicked with this odd message.
(I couldn't get a crash dump because my swap space is on a GEOM_MIRROR device.)
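In case it helps anyone in the same situation: since a gmirror provider can't (as far as I know) take a kernel crash dump, one common workaround is to point the dump device at one raw component of the mirror instead. A sketch, with a hypothetical device name (substitute your own mirror component):

```
# /etc/rc.conf fragment -- device name is an example only; use one raw
# component of the swap mirror so savecore can recover the dump on boot
dumpdev="/dev/ad4s1b"
```

This can also be done on a running system with `dumpon /dev/ad4s1b` before trying to reproduce the panic.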
I still can't reproduce this, but I suspect it can happen again under sufficiently heavy load.