ZFS Help: zpool gone after gpart recover

Hi there,
I have a USB backup disk with a single ZFS pool. After a hang on the USB bus and a clean reboot of the system (FreeBSD 10.2), the system started with
Code:
the primary GPT table is corrupt or invalid.
using the secondary instead -- recovery strongly advised.
gpart list da0
listed all partitions and showed the state as "damaged",
so I ran
gpart recover /dev/da0
which executed without any error messages.
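In hindsight I probably should have dumped the table before touching it; if I read gpart(8) correctly, something like
Code:
gpart backup da0 > /root/da0.gpt
would have saved it in a form that gpart restore can replay.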
After a reboot of the system the pool was not mounted. This is what gpart lists:
gpart list da0
Code:
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 15628053133
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 134217728 (128M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 1024
   Mode: r0w0e0
   rawuuid: 9ecada67-c562-4fed-801f-15ccbcb13b67
   rawtype: e3c9e316-0b5c-4db8-817d-f92df00215ae
   label: Microsoft reserved partition
   length: 134217728
   offset: 17408
   type: ms-reserved
   index: 1
   end: 262177
   start: 34
2. Name: da0p2
   Mediasize: 8001427603456 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: c681ea22-19ab-4e2c-a10b-ac198172961b
   rawtype: ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
   label: Basic data partition
   length: 8001427603456
   offset: 135266304
   type: ms-basic-data
   index: 2
   end: 15628052479
   start: 264192
Consumers:
1. Name: da0
   Mediasize: 8001563221504 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
And here is the output of zpool:
zpool status -x
Code:
  pool: BackupDisk2015_02
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
   replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

   NAME                    STATE     READ WRITE CKSUM
   BackupDisk2015_02       UNAVAIL      0     0     0
     17939114011953584277  UNAVAIL      0     0     0  was /dev/diskid/DISK-NA7DRQC8

  pool: BackupDisk2016_01
 state: FAULTED
status: One or more devices could not be used because the label is missing
   or invalid.  There are insufficient replicas for the pool to continue
   functioning.
action: Destroy and re-create the pool from
   a backup source.
   see: http://illumos.org/msg/ZFS-8000-5E
  scan: none requested
config:

   NAME                   STATE     READ WRITE CKSUM
   BackupDisk2016_01      FAULTED      0     0     0
     5348198291387904699  UNAVAIL      0     0     0  was /dev/gpt/Basic%20data%20partition
How can I recover the partition without losing any data?

Best regards,
Mike
 
Are you sure you have the right disk? The partition table only shows two Microsoft partitions and no freebsd-zfs partitions.
 
Are you sure you have the right disk? The partition table only shows two Microsoft partitions and no freebsd-zfs partitions.
Yes, without any doubt. The system is sitting on two 250 GB internal disks attached to an internal RAID controller, and there is only one external disk. Are there any disk scans I could try to fix this?
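Would a plain rescan of all device nodes be of any use at this point? If I read zpool(8) correctly, -d only tells import where to look for devices and doesn't write anything:
Code:
zpool import -d /dev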
 
You have two conflicting ZFS labels on the disk: the first one is a pool using the whole disk (/dev/diskid/DISK-NA7DRQC8), the second one is a pool using the large data partition on the same disk (/dev/gpt/Basic%20data%20partition, i.e. da0p2). Which one of these was the last working pool?
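If you want to see which label set is still readable, zdb(8) can dump the ZFS labels straight from the providers; both of these are read-only:
Code:
zdb -l /dev/da0
zdb -l /dev/da0p2
Whichever of the two still shows intact labels is most likely the pool that was last in use.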
 
You have two conflicting ZFS labels on the disk: the first one is a pool using the whole disk (/dev/diskid/DISK-NA7DRQC8), the second one is a pool using the large data partition on the same disk (/dev/gpt/Basic%20data%20partition, i.e. da0p2). Which one of these was the last working pool?
While I'm not absolutely sure about this, I would say it's the one that covers the whole disk, as that is how I usually set up single-disk pools. I normally use
zpool create Volumename /dev/VolumeID
On the other hand, the pool contains some datasets; maybe this is confusing the gpart output?
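If it helps, the pool on that disk would originally have been created more or less like this (reconstructed from memory, so treat the exact device name as a guess):
Code:
zpool create BackupDisk2015_02 /dev/diskid/DISK-NA7DRQC8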
 
If ZFS used the whole disk there shouldn't be a partition table. Perhaps that's a leftover from a previous configuration?
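A quick way to see the difference: on a disk that was really handed to ZFS whole there is nothing for gpart to find, so
Code:
gpart show da0
would normally just complain that there is no such geom instead of printing a table.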
 
If ZFS used the whole disk there shouldn't be a partition table. Perhaps that's a leftover from a previous configuration?
Could be. It's a Seagate Backup Plus 8 TB which came preformatted but was used exclusively with the FreeBSD server and its ZFS pool. The partition table might be a leftover from Seagate's formatting, but the disk was never used as an MS disk.
 
The problem now is that the "repair" of the partition table (which shouldn't have been there) may have corrupted ZFS even further as it's likely it overwrote the ZFS data.
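Before experimenting any further it might be wise to stash read-only copies of the areas where ZFS keeps its labels (roughly the first and last few MB of the disk). A rough sketch, assuming da0 is still the right device and using /bin/sh syntax:
Code:
size=$(diskinfo da0 | awk '{print $3}')    # third field is the media size in bytes
dd if=/dev/da0 of=/root/da0-front.bin bs=1m count=4
dd if=/dev/da0 of=/root/da0-back.bin bs=512 iseek=$(( (size - 4194304) / 512 ))
That only reads from the disk, so it can't make things worse.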
 
The problem now is that the "repair" of the partition table (which shouldn't have been there) may have corrupted ZFS even further as it's likely it overwrote the ZFS data.
If so, is there any chance of rebuilding it from the rest of the disk?
 
Not sure really, I suppose it's going to take some digging with zdb(8). But I never bothered with it, I normally just nuke the filesystem and restore from backups. But I guess that's not an option for you.
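The kind of digging I mean would be something along these lines; -e tells zdb(8) to treat the pool as exported, -p points it at a directory of device nodes, and none of it writes to the disk:
Code:
zdb -e -p /dev -C BackupDisk2015_02
zdb -e -p /dev -C BackupDisk2016_01
But I can't promise it leads anywhere.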
 
Not sure really, I suppose it's going to take some digging with zdb(8). But I never bothered with it, I normally just nuke the filesystem and restore from backups. But I guess that's not an option for you.
No, I would like to avoid nuking the filesystem. While it's a backup disk, it holds the data files of some Bacula backup jobs. And while the Bacula configs and databases are all safe on the system pool, I would still like to avoid it, because it would mean I would also have to start over with Bacula...
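Before doing anything destructive I will probably try a read-only import first; if I understand zpool(8) correctly, something like this, with whichever of the two pool names turns out to be the right one (the second line being only a dry run of the rewind):
Code:
zpool import -d /dev -f -o readonly=on BackupDisk2015_02
zpool import -d /dev -F -n BackupDisk2015_02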
 