Investigation:
Once upon a time I had a mirrored pool called zroot. At some point one of the drives in the pool became very, very slow (a notorious problem of the WD Caviar Green 15EADS), so I did a zpool detach of that disk and continued working.
A few weeks later I bought a new WD 10EARS (a 4K-sector drive). I partitioned it and used gnop to get 4 KB alignment. Using tar, I transferred all files from the old pool (zroot) to the new one (tank), in effect from the old drive to the new one. After booting from the new disk I made sure everything was in order.
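From memory, the 4K-alignment trick with gnop went roughly like this (the label name disk1 is how the partition shows up later in gpart show; exact sizes omitted here):

```shell
# Label the ZFS partition so it appears as /dev/gpt/disk1
gpart add -t freebsd-zfs -l disk1 ada1
# Put a fake 4096-byte-sector provider on top of it
gnop create -S 4096 /dev/gpt/disk1
# Create the pool on the .nop device so ZFS picks ashift=12
zpool create tank /dev/gpt/disk1.nop
# Re-import without the gnop layer; ashift stays at 12
zpool export tank
gnop destroy /dev/gpt/disk1.nop
zpool import tank
```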
Then, with some difficulty, I destroyed the first pool, re-partitioned the old disk, attached it to the new pool as a mirror and waited for the resilver to finish.
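That step was approximately the following (again from memory; bootcode installation omitted, device names as in gpart show below):

```shell
# Get rid of the old pool and partitioning
zpool destroy zroot
gpart destroy -F ada0
# Recreate the same layout as on the new disk
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 128 ada0
gpart add -t freebsd-swap -s 4G ada0
gpart add -t freebsd-zfs ada0
# Attach the old disk to the new pool as a mirror; resilver starts
zpool attach tank gpt/disk1 ada0p3
```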
Later I noticed in the zpool status output that the first drive had been attached by gptid. I found that ugly, so I detached it from the pool, gave the partition a GPT label, attached it again by that label and waited for another resilver.
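For the record, the detach/relabel/attach sequence was something like this (the gptid below is a placeholder, not the real one):

```shell
# Detach the disk that zpool status showed by gptid
zpool detach tank gptid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
# Give partition 3 a GPT label so it appears as /dev/gpt/disk0
gpart modify -i 3 -l disk0 ada0
# Re-attach by label; another resilver starts
zpool attach tank gpt/disk1 gpt/disk0
```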
After rebooting, the pool was FAULTED.
Current status:
Code:
# gpart show
=>        34  2930277101  ada0  GPT  (1.4T)
          34         128     1  freebsd-boot  (64k)
         162        1886        - free -  (943k)
        2048     8388608     2  freebsd-swap  (4.0G)
     8390656  2921886479     3  freebsd-zfs  (1.4T)

=>        34  2930277101  ada1  GPT  (1.4T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168        1880        - free -  (940k)
        2048     8388608     2  freebsd-swap  (4.0G)
     8390656  2921886472     3  freebsd-zfs  (1.4T)
  2930277128           7        - free -  (3.5k)
# zdb -eX
Configuration for import:
        vdev_children: 1
        version: 28
        pool_guid: 1487408506722295672
        name: 'tank'
        state: 0
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1487408506722295672
            children[0]:
                type: 'mirror'
                id: 0
                guid: 14949755914536867893
                whole_disk: 0
                metaslab_array: 30
                metaslab_shift: 33
                ashift: 12
                asize: 1496001019904
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 12181262761882874517
                    path: '/dev/gpt/disk0'
                    phys_path: '/dev/gpt/disk0'
                    whole_disk: 1
                    [B]DTL: 209[/B]
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 10476129156643193055
                    phys_path: '/dev/gpt/disk1'
                    whole_disk: 1
                    [B]DTL: 212[/B]
                    create_txg: 4
                    [B]resilvering: 1[/B]
                    path: '/dev/dsk/gpt/disk1'
[I]zdb: can't open 'tank': no such directory[/I]
# zpool import
   pool: tank
     id: 1487408506722295672
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: http://www.sun.com/msg/ZFS-8000-5E
 config:

        tank                      FAULTED  corrupted data
          mirror-0                ONLINE
            12181262761882874517  UNAVAIL  corrupted data
            10476129156643193055  UNAVAIL  corrupted data
# zpool import -fFX -o ro
This did not help.
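As far as I know, -o ro is not actually a valid option here; read-only import is spelled with the pool property, so what I probably should have tried is:

```shell
# readonly is a pool property, so it must be passed as -o readonly=on;
# -f forces import, -F rewinds to an earlier txg, -X allows extreme rewind
zpool import -fFX -o readonly=on tank
```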
As far as I can see, all the labels are fine too. They can be found on
pastebin.
Three questions:
1. Can anything be done? I have no backups. Booting from the pool still works up to mountroot (the loader can read the kernel).
2. Why does
# sysctl vfs.zfs.debug=1
actually enable debugging in GEOM rather than in ZFS?
3. What should I try next?