Anonymous (Guest)
Just tell me upfront, am I SOL?
I'll admit, I'm new to ZFS and I didn't RTFM. I've been using it for a little over a year, but our FreeBSD 8.0 system crashed on me, and I lost some binaries, including the ZFS tools. I tried fixing it with Fixit but had no luck, so I rebuilt world and the kernel on a fresh hard drive. The old system had a raidz zpool named "tank" containing da0 and da1 (which are actually two links to an array of 16 drives), and I needed to remount these in a hurry. I asked the guy who originally set it up for us (not me) for advice, and all I could get out of him was "Google it" after I had already been doing that for 4 hours.
Unfortunately I started acting on my Google finds before I fully knew what I was doing. I started trying to "mount" the drives...
I know now that I should have been using zpool import, but before I came to that revelation I had already tried to force a mount of da0, which created a new "tank" pool (same name as the old one) and removed da0 from the original pool. When I realized that wasn't what I wanted, it was already too late, and I've since destroyed that new pool.
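For the record, and going from memory, I believe the damage was done by something roughly like this (I'm not certain of the exact flags): a forced create that overwrote da0's labels with a new single-disk pool, followed by a destroy once I saw my mistake:
Code:
# zpool create -f tank da0
# zpool destroy tank

This is where we stand now. If I try a plain import: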
Code:
# zpool import
  pool: tank
    id: 4433502968625883981
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        tank        UNAVAIL  insufficient replicas
          da1       ONLINE
If I list destroyed pools:
Code:
# zpool import -D
  pool: tank
    id: 12367720188787195607
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          da0       ONLINE
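From what I've read, a destroyed pool can presumably be re-imported by its numeric id with something like the command below, but I haven't dared run it, since this entry is the new single-disk pool I accidentally created on da0, not my original one:
Code:
# zpool import -D 12367720188787195607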
If I dump the labels on each drive with zdb, here's what I get.
Bad drive (da0):
Code:
# zdb -l /dev/da0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='tank'
    state=2
    txg=50
    pool_guid=12367720188787195607
    hostid=2180312168
    hostname='proj.bullseye.tv'
    top_guid=6830294387039432583
    guid=6830294387039432583
    vdev_tree
        type='disk'
        id=0
        guid=6830294387039432583
        path='/dev/da0'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=6998387326976
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='tank'
    state=2
    txg=50
    pool_guid=12367720188787195607
    hostid=2180312168
    hostname='proj.bullseye.tv'
    top_guid=6830294387039432583
    guid=6830294387039432583
    vdev_tree
        type='disk'
        id=0
        guid=6830294387039432583
        path='/dev/da0'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=6998387326976
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='tank'
    state=2
    txg=50
    pool_guid=12367720188787195607
    hostid=2180312168
    hostname='proj.bullseye.tv'
    top_guid=6830294387039432583
    guid=6830294387039432583
    vdev_tree
        type='disk'
        id=0
        guid=6830294387039432583
        path='/dev/da0'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=6998387326976
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='tank'
    state=2
    txg=50
    pool_guid=12367720188787195607
    hostid=2180312168
    hostname='proj.bullseye.tv'
    top_guid=6830294387039432583
    guid=6830294387039432583
    vdev_tree
        type='disk'
        id=0
        guid=6830294387039432583
        path='/dev/da0'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=6998387326976
        is_log=0
Good drive (da1):
Code:
# zdb -l /dev/da1
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='tank'
    state=0
    txg=4
    pool_guid=4433502968625883981
    hostid=2180312168
    hostname='zproj.bullseye.tv'
    top_guid=11718615808151907516
    guid=11718615808151907516
    vdev_tree
        type='disk'
        id=1
        guid=11718615808151907516
        path='/dev/da1'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=7001602260992
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='tank'
    state=0
    txg=4
    pool_guid=4433502968625883981
    hostid=2180312168
    hostname='zproj.bullseye.tv'
    top_guid=11718615808151907516
    guid=11718615808151907516
    vdev_tree
        type='disk'
        id=1
        guid=11718615808151907516
        path='/dev/da1'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=7001602260992
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='tank'
    state=0
    txg=4
    pool_guid=4433502968625883981
    hostid=2180312168
    hostname='zproj.bullseye.tv'
    top_guid=11718615808151907516
    guid=11718615808151907516
    vdev_tree
        type='disk'
        id=1
        guid=11718615808151907516
        path='/dev/da1'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=7001602260992
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='tank'
    state=0
    txg=4
    pool_guid=4433502968625883981
    hostid=2180312168
    hostname='zproj.bullseye.tv'
    top_guid=11718615808151907516
    guid=11718615808151907516
    vdev_tree
        type='disk'
        id=1
        guid=11718615808151907516
        path='/dev/da1'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=7001602260992
        is_log=0
Notice the different hostnames? The good drive (da1) still shows the old hostname "zproj", while the "bad" drive (da0) shows the new hostname "proj". The pool_guid values don't match either: da1 still carries the original pool's id (4433502968625883981), while da0 now carries the id of the pool I accidentally created (12367720188787195607).
Can anyone tell me whether my ignorance (and the lack of professional assistance) has totally screwed me here? Since the disks sit on a RAID array, is there any chance I can still recover this data? Better yet, is there a way to get this pool back together to its former glory?
I really need some guidance with this. Any help is greatly appreciated.
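In the meantime, before I touch anything else, I plan to image the ZFS label areas on both disks so that any further attempts can't make things worse. If I understand the on-disk format correctly, each vdev has four 256 KiB labels, two at the front and two at the end of the device, so something like this should at least capture the front pair (the output paths are just my guesses at a safe location):
Code:
# dd if=/dev/da0 of=/root/da0-front-labels.bin bs=256k count=2
# dd if=/dev/da1 of=/root/da1-front-labels.bin bs=256k count=2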