Hi. I'm experimenting with ZFS (FreeBSD 8.0-p0) in a VMware 6.5 box.
The box has 4 virtual disks of 1 GB each. They are gathered into a RAIDZ1 pool, and I'm experimenting with its 'survivability'.
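For reference, the pool was created along these lines (the pool name 'test' is the one that shows up in the log further down; the device names da0-da3 are simply how the disks appear in my VM):
Code:
# create a 4-disk raidz1 pool from the virtual disks
zpool create test raidz1 da0 da1 da2 da3
# confirm all four members are ONLINE
zpool status test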
- 1 disk removed: 'zpool status' says the array is DEGRADED and the disk is UNAVAIL. 'zpool replace' (with another disk) fixes it, so no problem (see the sketch after this list).
- 2 disks removed: 'zpool status' says FAULTED. That's OK, since it's raidz1.
- 1 disk replaced: 'zpool replace' fixes it.
- 2 disks replaced: can't be fixed, which is also OK.
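The replace step in the tests above was roughly this (the device names are just examples from my setup):
Code:
# after swapping the virtual disk in VMware, resilver onto the new device
zpool replace test da3          # or 'zpool replace test da3 da4' if the new disk gets a different name
# watch the resilver finish and the pool go back to ONLINE
zpool status test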
Now I do this (starting from a healthy 4-disk raidz1): shut down, go to the VMware settings, remove disks 3 and 4, and create one new disk, which becomes disk 3 (da2). When I boot and run 'zpool status', it freezes.
I waited at least 30 minutes with no progress and no I/O activity during that time. Ctrl-C, SIGTERM, SIGKILL: no effect. It seems the zpool process cannot be killed at all. It holds da{1,2,3}, so they cannot be written to. The zpool commands import/export/status/list/destroy all freeze the same way. I'm not saying the RAID must survive this; my point is that ZFS becomes uncontrollable. All I get is:
Code:
# zpool status
Jan 5 15:36:01 root: ZFS: vdev failure, zpool=test type=vdev.open_failed
Jan 5 15:36:01 root: ZFS: vdev failure, zpool=test type=vdev.bad_label
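For anyone trying to reproduce this: the hung zpool process looks like it is stuck in an uninterruptible kernel sleep, and something like the following should show it (I did not capture the exact wait channel, so treat this as a sketch rather than verified output):
Code:
# the stuck process sits in state 'D' (uninterruptible wait) and ignores signals
ps -axl | grep zpool
# if your procstat supports kernel stack dumps, this shows where it is blocked
procstat -kk <pid-of-zpool>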
I found a way to re-init this: reboot, don't run any zpool command, remove /boot/zfs/zpool.cache (this wipes all zpool configurations), then dd if=/dev/zero of=/dev/daX bs=5m count=1 for each member disk. After another reboot it releases the old disks and no longer freezes. Without the dd it says da0 is in use somewhere (the old RAID) and it can't create a new raidz.
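Spelled out as commands, my recovery sequence looks like this (which daX devices you need to wipe depends on what was in the old pool):
Code:
# reboot first and do NOT run any zpool command
rm /boot/zfs/zpool.cache                     # forget all cached pool configurations
dd if=/dev/zero of=/dev/da0 bs=5m count=1    # clobber the old ZFS labels at the start of each former member
dd if=/dev/zero of=/dev/da1 bs=5m count=1
dd if=/dev/zero of=/dev/da2 bs=5m count=1
dd if=/dev/zero of=/dev/da3 bs=5m count=1
# reboot again; the disks are released and zpool commands work normally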
Has anyone seen this issue before?