Solved ZFS upgrade from version 28 to 5000

Hi,
I upgraded my FreeBSD box from 10.2 to 10.3 and then to 11.0. Everything went smoothly, without problems.

So the last step was to upgrade ZFS. Here are my console logs showing how I did it:
Code:
[root@forteca ~]# uname -a
FreeBSD forteca.xxx 11.0-RELEASE-p7 FreeBSD 11.0-RELEASE-p7 #8: Sat Jan 14 23:15:14 CET 2017     root@forteca.xxx:/usr/obj/usr/src/sys/FORTECA  amd64

[root@forteca ~]# beadm list
BE                   Active Mountpoint  Space Created
10.2-RELEASE         -      -          182.8M 2015-09-13 19:45
10.2-RELEASE-torrent -      -          429.3M 2016-05-11 19:26
11.0-RELEASE         NR     /           34.4G 2017-01-14 16:16

[root@forteca ~]# df -h
Filesystem                 Size    Used   Avail Capacity  Mounted on
zroot/ROOT/11.0-RELEASE    139G     21G    118G    15%    /
devfs                      1.0K    1.0K      0B   100%    /dev
linprocfs                  4.0K    4.0K      0B   100%    /usr/compat/linux/proc
procfs                     4.0K    4.0K      0B   100%    /proc
tank                       225G     74G    151G    33%    /tank
zroot/usr-home             159G     41G    118G    26%    /usr/home
zroot/vm                   148G     30G    118G    20%    /vm

[root@forteca ~]# zpool status
  pool: tank
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          gpt/disk1.1  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

  pool: zroot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 241M in 405738h37m with 0 errors on Thu Apr 14 20:37:16 2016
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors
[root@forteca ~]# zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.
multi_vdev_crash_dump
     Crash dumps to multiple vdev pools.
spacemap_histogram                    (read-only compatible)
     Spacemaps maintain space histograms.
enabled_txg                           (read-only compatible)
     Record txg at which a feature is enabled
hole_birth
     Retain hole birth txg for more precise zfs send
extensible_dataset
     Enhanced dataset functionality, used by other features.
embedded_data
     Blocks which compress very well use even less space.
bookmarks                             (read-only compatible)
     "zfs bookmark" command
filesystem_limits                     (read-only compatible)
     Filesystem and snapshot limits.
large_blocks
     Support for blocks larger than 128KB.
sha512
     SHA-512/256 hash algorithm.
skein
     Skein hash algorithm.

The following legacy versions are also supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

[root@forteca ~]# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(7) for details.

POOL  FEATURE
---------------
tank
      sha512
      skein
zroot
      sha512
      skein

[root@forteca ~]# zpool upgrade -a
This system supports ZFS pool feature flags.

Enabled the following features on 'tank':
  sha512
  skein

Enabled the following features on 'zroot':
  sha512
  skein

If you boot from pool 'zroot', don't forget to update boot code.
Assuming you use GPT partitioning and da0 is your boot disk
the following command will do it:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

[root@forteca ~]# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
partcode written to ada1p1
bootcode written to ada1
[root@forteca ~]# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
partcode written to ada0p1
bootcode written to ada0
[root@forteca ~]# zpool status
  pool: tank
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          gpt/disk1.1  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: resilvered 241M in 405738h37m with 0 errors on Thu Apr 14 20:37:16 2016
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors
[root@forteca ~]# reboot

And after the reboot I can't even see the bootloader. Right after choosing in the BIOS which disk to boot from, the system reboots, and it keeps looping like that.

Any suggestions on how I can recover from this and fix it? What could the issue be?
 
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

[root@forteca ~]# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1

The main issue I can see is that you didn't write the ZFS-capable bootloader to the disks. If you can get into a live CD you should be able to import the pool and re-run the gpart(8) command to write the gptzfsboot code to the disks. (Or, if you boot off an 11.0 disc that carries the same gptzfsboot file, you can simply write the bootloader from that without importing the pool.) Hopefully that will be enough to get it working.
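To make that concrete, here is a rough sketch of the commands from a live environment. This assumes GPT partitioning, that the freebsd-boot partitions are at index 1, and that the disks show up as ada0/ada1 on the live system (check with gpart show first, since device names can differ there):
Code:
# Inspect the partition tables; confirm the index of the freebsd-boot partition
gpart show ada0
gpart show ada1

# Write the ZFS-aware bootcode to both disks of the mirror
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1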

As long as your pool is functional, getting it to boot shouldn't be too much of an issue. With FreeBSD 11 you don't need the cache file or the vfs.root.mountfrom option. Having a pool that contains the FreeBSD system, with the bootfs property set and the correct bootcode written, should be enough.
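If you do end up importing the pool from the live system anyway, you can verify the bootfs property while you're there. A sketch, using this system's boot environment dataset (zroot/ROOT/11.0-RELEASE) as the example value:
Code:
# -R keeps the pool's datasets from mounting over the live system
zpool import -f -R /mnt zroot
zpool get bootfs zroot
# If it's unset, point it at the boot environment dataset:
zpool set bootfs=zroot/ROOT/11.0-RELEASE zroot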
 
Oh God, I've just seen the message from zpool upgrade:
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
And I knew that I had used "that" command in the past:
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
but it looks like it was a different one :(
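For anyone else who hits this: gptboot(8) only understands UFS, while gptzfsboot is the stage that can read a ZFS pool. Both ship in /boot on a stock install, so it's an easy mix-up:
Code:
ls -l /boot/gptboot /boot/gptzfsboot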
 
(Or if you boot off a v11 disk that has the same gptzfsboot file, you can just write the bootloader from that without importing the pool). Hopefully that will be enough to get it working.
This did the trick.
I booted a live USB stick with FreeBSD 11 on board and executed:

Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
Thank you so much for the help!
 