Hello.
Here is how it all started. We decided to upgrade from 8.4-STABLE to 9.3-RELENG. I downloaded the source via subversion, upgraded successfully to 9.3, and rebooted. Then we decided to upgrade the ZFS pool. The upgrade was successful, but I did not know how to update the boot code. I was looking for a solution on the internet while the server kept working, but the UPS did not protect against a power failure, so the server rebooted. And now we are in trouble.
Right after the update, here are some lines from the messages log:
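For reference, the upgrade itself followed roughly the standard source-upgrade procedure; I am reconstructing the exact commands from memory, so the svn URL and order of steps below are approximate (raid-5 is simply my pool name):

Code:
# fetch the releng/9.3 sources (URL approximate)
svn checkout https://svn.freebsd.org/base/releng/9.3 /usr/src
cd /usr/src
make buildworld
make buildkernel
make installkernel
shutdown -r now
# after rebooting on the new kernel:
cd /usr/src
mergemaster -p
make installworld
mergemaster
shutdown -r now
# and then the pool feature upgrade:
zpool upgrade raid-5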
Code:
kernel: ZFS filesystem version: 5
kernel: ZFS storage pool version: features support (5000)
and

Code:
kernel: GEOM: da0: the primary GPT table is corrupt or invalid.
kernel: GEOM: da0: using the secondary instead -- recovery strongly advised.
kernel: GEOM: da1: the primary GPT table is corrupt or invalid.
kernel: GEOM: da1: using the secondary instead -- recovery strongly advised.
kernel: GEOM: da2: the primary GPT table is corrupt or invalid.
kernel: GEOM: da2: using the secondary instead -- recovery strongly advised.
kernel: GEOM: da3: the primary GPT table is corrupt or invalid.
kernel: GEOM: da3: using the secondary instead -- recovery strongly advised.
kernel: Trying to mount root from zfs:raid-5 []...
The command I found suggested online was:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

I was afraid to run this command, because I was not sure whether it could cause damage or not.

Here is the pool status:
Code:
# zpool status -v
pool: raid-5
state: ONLINE
scan: scrub repaired 0 in 307445734561825859h27m with 0 errors on Thu Feb 12 10:41:09 2015
config:
NAME STATE READ WRITE CKSUM
raid-5 ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
da0 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
errors: No known data errors
Here is some more info:

Code:
# egrep '(^ad.*|^da.*)' /var/run/dmesg.boot
da0 at ahc0 bus 0 scbus0 target 0 lun 0
da0: <SEAGATE ST336607LW 0007> Fixed Direct Access SCSI-3 device
da0: Serial Number 3JA961J600007503TCGJ
da0: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
da0: Command Queueing enabled
da0: 35003MB (71687372 512 byte sectors: 255H 63S/T 4462C)
da1 at ahc0 bus 0 scbus0 target 1 lun 0
da1: <SEAGATE ST336607LW 0007> Fixed Direct Access SCSI-3 device
da1: Serial Number 3JA976N9000075032M8N
da1: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
da1: Command Queueing enabled
da1: 35003MB (71687372 512 byte sectors: 255H 63S/T 4462C)
da2 at ahc0 bus 0 scbus0 target 2 lun 0
da2: <IBM IC35L036UWDY10-0 S23C> Fixed Direct Access SCSI-3 device
da2: Serial Number E3V36YLB
da2: 160.000MB/s transfers (80.000MHz DT, offset 127, 16bit)
da2: Command Queueing enabled
da2: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)
da3 at ahc0 bus 0 scbus0 target 3 lun 0
da3: <IBM IC35L036UWDY10-0 S23C> Fixed Direct Access SCSI-3 device
da3: Serial Number E3V2YT8B
da3: 160.000MB/s transfers (80.000MHz DT, offset 127, 16bit)
da3: Command Queueing enabled
da3: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)
And camcontrol devlist:

Code:
# camcontrol devlist
<SEAGATE ST336607LW 0007> at scbus0 target 0 lun 0 (da0,pass0)
<SEAGATE ST336607LW 0007> at scbus0 target 1 lun 0 (da1,pass1)
<IBM IC35L036UWDY10-0 S23C> at scbus0 target 2 lun 0 (da2,pass2)
<IBM IC35L036UWDY10-0 S23C> at scbus0 target 3 lun 0 (da3,pass3)
<HL-DT-ST DVDRAM GSA-H10N JL10> at scbus2 target 1 lun 0 (cd0,pass4)
Here is the info for the zpool:

Code:
# zpool get all raid-5
NAME PROPERTY VALUE SOURCE
raid-5 size 136G -
raid-5 capacity 39% -
raid-5 altroot - default
raid-5 health ONLINE -
raid-5 guid 4451690707634593073 default
raid-5 version - default
raid-5 bootfs raid-5 local
raid-5 delegation on default
raid-5 autoreplace off default
raid-5 cachefile - default
raid-5 failmode wait default
raid-5 listsnapshots off default
raid-5 autoexpand off default
raid-5 dedupditto 0 default
raid-5 dedupratio 1.00x -
raid-5 free 82,7G -
raid-5 allocated 53,3G -
raid-5 readonly off -
raid-5 comment - default
raid-5 expandsize 0 -
raid-5 freeing 0 default
raid-5 feature@async_destroy enabled local
raid-5 feature@empty_bpobj active local
raid-5 feature@lz4_compress enabled local
raid-5 feature@multi_vdev_crash_dump enabled local
raid-5 feature@spacemap_histogram active local
raid-5 feature@enabled_txg active local
raid-5 feature@hole_birth active local
raid-5 feature@extensible_dataset enabled local
raid-5 feature@bookmarks enabled local
raid-5 feature@filesystem_limits enabled local
After the UPS power failure, I got this on the screen:

Code:
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
ZFS: unsupported feature: com.delphix:hole_birth
zfsboot: No ZFS pools located, can't boot
When I booted from the FreeBSD 10 DVD to a console, here is what I can see with gpart show:
Code:
#gpart show da0
=> 63 71687389 da0 MBR (34G)
63 71687389 -free- (34G)
#gpart show da1
=> 63 71687389 da1 MBR (34G)
63 71687389 -free- (34G)
#gpart show da2
=> 63 71687277 da2 MBR (34G)
63 71687277 -free- (34G)
#gpart show da3
=> 63 71687277 da3 MBR (34G)
63 71687277 -free- (34G)
Can anyone help me in this situation, please? Stupidly, I did not make even a single backup before the upgrade.

glabel list