I am an idiot.
I have no backup. Nothing critically important, but 6 TB of personal data I would sorely like to recover. I solemnly swear to create real backups first, before anything else, if I can get this working again.
I'm a newbie to FreeBSD. This new FreeBSD system is replacing an old Debian 6 server, which was running ZFSonLinux. I installed FreeBSD 11.0-RELEASE, set up the new storage pool (encrypted 10-disk RAID-Z2), transferred my data over (rsync, not zfs send/recv), ran happily for a few days, then got rid of the old hardware containing the old pool.
The first time I rebooted the system, I did the geli attach commands but could not run zpool import -a. The new pool, "tank", was already present; however, I had to run zpool clear tank to make it available. I believe it was complaining about there not being a separate log device. I figured it was a side effect of using encrypted devices and didn't think much of it, and continued using the pool with no problems.

I happened to install the lsof package today, which printed:
Code:
lsof: WARNING: compiled for FreeBSD release 11.0-RELEASE-p7; this is 11.0-RELEASE-p1.
Which is when I realized I needed to run updates. Duh. So I looked it up in the handbook again and ran freebsd-update fetch and freebsd-update install. Knowing I needed to reboot for the kernel update, I figured I might as well try to shut down more gracefully than last time: I ran zpool export tank and the geli detach commands, then issued a reboot (the full sequence is sketched after the zdb output below). After logging in again, I ran the geli attach commands, but... I can't reimport the pool.
Code:
# zpool import
   pool: tank
     id: 14383579345605766299
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
        tank         ONLINE
          raidz2-0   ONLINE
            da0.eli  ONLINE
            da1.eli  ONLINE
            da2.eli  ONLINE
            da3.eli  ONLINE
            da4.eli  ONLINE
            da5.eli  ONLINE
            da6.eli  ONLINE
            da7.eli  ONLINE
            da8.eli  ONLINE
            da9.eli  ONLINE
# zpool import tank
cannot import 'tank': one or more devices is currently unavailable
# zpool import -a
cannot import 'tank': one or more devices is currently unavailable
# zpool import -F -n tank
# zpool import -f tank
cannot import 'tank': one or more devices is currently unavailable
# zdb -e -C tank
MOS Configuration:
version: 5000
name: 'tank'
state: 1
txg: 44820
pool_guid: 14383579345605766299
hostid: 807724783
hostname: 'filesrv'
com.delphix:has_per_vdev_zaps
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 14383579345605766299
create_txg: 4
children[0]:
type: 'raidz'
id: 0
guid: 4695976455601603174
nparity: 2
metaslab_array: 46
metaslab_shift: 38
ashift: 12
asize: 35562502225920
is_log: 0
create_txg: 4
com.delphix:vdev_zap_top: 35
children[0]:
type: 'disk'
id: 0
guid: 11402930581434154143
path: '/dev/da0.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 36
children[1]:
type: 'disk'
id: 1
guid: 18121392312499277188
path: '/dev/da1.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 37
children[2]:
type: 'disk'
id: 2
guid: 6682933295533031840
path: '/dev/da2.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 38
children[3]:
type: 'disk'
id: 3
guid: 7049728495400098022
path: '/dev/da3.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 39
children[4]:
type: 'disk'
id: 4
guid: 4350589431371682403
path: '/dev/da4.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 40
children[5]:
type: 'disk'
id: 5
guid: 8394194089471178648
path: '/dev/da5.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 41
children[6]:
type: 'disk'
id: 6
guid: 13473629487508779270
path: '/dev/da6.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 42
children[7]:
type: 'disk'
id: 7
guid: 8293128023524559929
path: '/dev/da7.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 43
children[8]:
type: 'disk'
id: 8
guid: 4660497218889374037
path: '/dev/da8.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 44
children[9]:
type: 'disk'
id: 9
guid: 916718491707390357
path: '/dev/da9.eli'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 45
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
# zdb -l /dev/da0.eli
--------------------------------------------
LABEL 0
--------------------------------------------
version: 5000
name: 'tank'
state: 1
txg: 44820
pool_guid: 14383579345605766299
hostid: 807724783
hostname: 'filesrv'
top_guid: 4695976455601603174
guid: 11402930581434154143
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 4695976455601603174
nparity: 2
metaslab_array: 46
metaslab_shift: 38
ashift: 12
asize: 35562502225920
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 11402930581434154143
path: '/dev/da0.eli'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 18121392312499277188
path: '/dev/da1.eli'
whole_disk: 1
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 6682933295533031840
path: '/dev/da2.eli'
whole_disk: 1
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 7049728495400098022
path: '/dev/da3.eli'
whole_disk: 1
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 4350589431371682403
path: '/dev/da4.eli'
whole_disk: 1
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 8394194089471178648
path: '/dev/da5.eli'
whole_disk: 1
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 13473629487508779270
path: '/dev/da6.eli'
whole_disk: 1
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 8293128023524559929
path: '/dev/da7.eli'
whole_disk: 1
create_txg: 4
children[8]:
type: 'disk'
id: 8
guid: 4660497218889374037
path: '/dev/da8.eli'
whole_disk: 1
create_txg: 4
children[9]:
type: 'disk'
id: 9
guid: 916718491707390357
path: '/dev/da9.eli'
whole_disk: 1
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
--------------------------------------------
LABEL 1
--------------------------------------------
version: 5000
name: 'tank'
state: 1
txg: 44820
pool_guid: 14383579345605766299
hostid: 807724783
hostname: 'filesrv'
top_guid: 4695976455601603174
guid: 11402930581434154143
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 4695976455601603174
nparity: 2
metaslab_array: 46
metaslab_shift: 38
ashift: 12
asize: 35562502225920
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 11402930581434154143
path: '/dev/da0.eli'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 18121392312499277188
path: '/dev/da1.eli'
whole_disk: 1
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 6682933295533031840
path: '/dev/da2.eli'
whole_disk: 1
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 7049728495400098022
path: '/dev/da3.eli'
whole_disk: 1
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 4350589431371682403
path: '/dev/da4.eli'
whole_disk: 1
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 8394194089471178648
path: '/dev/da5.eli'
whole_disk: 1
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 13473629487508779270
path: '/dev/da6.eli'
whole_disk: 1
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 8293128023524559929
path: '/dev/da7.eli'
whole_disk: 1
create_txg: 4
children[8]:
type: 'disk'
id: 8
guid: 4660497218889374037
path: '/dev/da8.eli'
whole_disk: 1
create_txg: 4
children[9]:
type: 'disk'
id: 9
guid: 916718491707390357
path: '/dev/da9.eli'
whole_disk: 1
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
--------------------------------------------
LABEL 2
--------------------------------------------
failed to read label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to read label 3
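To spell out exactly what I did around this reboot, the sequence was roughly the following (reconstructed from memory, so the exact geli invocations may be slightly off; my setup prompts for a passphrase, while a keyfile-based setup would add -k):

Code:
# before the reboot
zpool export tank
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    geli detach ${d}.eli
done
reboot

# after logging back in
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    geli attach ${d}
done
zpool import tank    # fails: "one or more devices is currently unavailable"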
The outputs of zdb -l /dev/da[1-9] are identical (checked with diff; the loop I used is at the bottom of this post), except for the top-level guid lines in LABEL 0 and LABEL 1 (lines 12 and 109 respectively). I manually verified that the output for each disk corresponds to the correct child disk path/guid. I can post the full outputs if you like, but it felt like that would be a bit overwhelming for a first post.

IT help desk instincts kicked in, so I tried a simple reboot. Nope, same thing. I even tried running freebsd-update rollback and rebooting again, without success.

Any ideas? Any more information I can provide?
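For reference, the diff comparison mentioned above was basically this (the temp file names are just for illustration):

Code:
# dump the labels of each member and compare them pairwise
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    zdb -l /dev/${d}.eli > /tmp/label-${d}
done
diff /tmp/label-da1 /tmp/label-da2    # and so on; only the per-disk guid lines differ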