Solved: Rollback after zpool upgrade possible?

I need to verify something. I've upgraded from FreeBSD 11.2 to 12 and everything seemed to be working fine. Then I noticed there was a new version of ZFS, so I ran zpool upgrade -a. However, before I did the upgrade I made a snapshot.

Is it possible to roll back to a snapshot after a zpool upgrade?

Something tells me it isn't, and that this is why zpool checkpoint exists, but I'm not quite sure.
 
You can't "undo" the upgrade, if that's what you're asking. Snapshots made prior to the upgrade will work as usual after the upgrade.
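For plain dataset snapshots it's just the usual zfs(8) commands, before or after the pool upgrade. A rough sketch (the dataset and snapshot names are made up):

Code:
# list the snapshots of a dataset
zfs list -t snapshot zroot/usr/home
# roll the dataset back to that snapshot
# (add -r if newer snapshots exist and you are willing to destroy them)
zfs rollback zroot/usr/home@pre-upgrade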
 
Yes and no. What I'm asking is whether a rollback to the 11.2 snapshot would still work. I think the LSI 3008 driver is messed up somehow; I'm not sure what to make of it. All my drives seem to be failing at this point: three disks in one ZFS pool are failing, and three disks at once seems a bit unlikely.

That, or the card suddenly broke during the upgrade. Or the metadata got corrupted after the zpool upgrade. The only drives that don't seem to be failing are the SSDs on the SATA controller.
 
The problem is that if you have run zpool upgrade on the pools, the version of ZFS in 11.2 will refuse to import them because they are now at a later version. I don't believe there is any easy way around that. You might be able to manually disable the ZFS features that were added, but that is uncharted territory.
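If you want to see where a pool stands, something like this shows it (the pool name is just an example; on 12 the version shows as "-" once feature flags are in use):

Code:
# legacy version number, or "-" when the pool uses feature flags
zpool get version depot
# with no arguments, lists pools that could be upgraded and the features they lack
zpool upgrade
# which feature flags are enabled/active on the pool
zpool get all depot | grep feature@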
 
What I'm asking is whether a rollback to the 11.2 snapshot would still work.
Ah, this is more like a sysutils/beadm or bectl(8) type snapshot. Then no; as xtaz explained above, 11.2 won't be able to import or use a pool that has a higher version than it can support.

Moral of this story: don't upgrade your pools unless you are absolutely sure you won't need to revert to a previous version of FreeBSD.
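For anyone reading this later, the boot environment route sketched out (names are examples, see bectl(8) for the details):

Code:
# create a boot environment of the current system before upgrading
bectl create 11.2-pre-upgrade
# verify it exists
bectl list
# ...perform the OS upgrade; if it goes wrong, activate the old one
# so it is booted on the next reboot
bectl activate 11.2-pre-upgrade

Keep in mind this only helps as long as you haven't also run zpool upgrade; once the pool has newer feature flags enabled, the old boot environment can't use it either, which is exactly the situation here.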
 
Yeah, I know. However, at the time I did it I was pretty sure...

I should also have made a checkpoint...
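For future reference, mostly my own, the checkpoint route would have looked roughly like this, going by the checkpoint section of zpool(8):

Code:
# take a checkpoint of the pool before the risky change
zpool checkpoint depot
# ...zpool upgrade, reconfigure, test...
# to throw away everything after the checkpoint, the pool must be re-imported
zpool export depot
zpool import --rewind-to-checkpoint depot
# or, if all is well, discard the checkpoint and free the space it holds back
zpool checkpoint -d depot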
 
Depending on the controller type you might want to try switching to mrsas(4). You can switch back and forth between mfi(4) and mrsas(4) fairly easily. It's also safe to do so.
 
The controller is a PCI-E 8x LSI 9340-I IBM 46C9115 M1215 12Gb/s RAID 0/1/10 SATA/SAS controller. How exactly do I do that in FreeBSD?

Right: hw.mfi.mrsas_enable="1"
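As far as I know that's a loader tunable, so it goes in /boot/loader.conf and takes effect on the next reboot:

Code:
# /boot/loader.conf
hw.mfi.mrsas_enable="1"   # hand MegaRAID SAS controllers to mrsas(4) instead of mfi(4)

One thing to watch out for, if memory serves: disks that showed up as mfidX under mfi(4) appear as daX under mrsas(4), so anything referencing raw device names instead of GPT labels needs adjusting.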
 
I don't want to jump the gun and still need to check properly (I'm doing this from my phone right now), but I think that worked.

EDIT:
I still see this at boot:
Code:
ZFS: i/o error - all block copies unavailable                                   
ZFS: can't read MOS of pool depot

Seems like the depot pool has some issues and is unavailable at the moment. Scrubbing tank still gives errors, but there are no more controller errors; maybe the bad controller earlier messed up some parts of the drives.
 
Thanks, but at this point I will be glad if everything works; I'm not feeling very adventurous at the moment :). But it's good to keep in mind... if something isn't working out I can still try upgrading to STABLE.

I think there is some actual damage done to the pools. At some point a scrub started which I hadn't noticed, and it "repaired" 40 GB of data that wasn't broken, so I wonder what it replaced it with.
 
I'm wondering: if the metadata of a pool is broken, would I be able to recover it? The tank pool seems fine, but it has SSD cache and log devices on mirrored SSDs, so I assume the metadata for those drives is on the SSDs and never got corrupted.

However, my depot pool doesn't have an SSD cache/log drive.

So I assume that's why that pool is broken and the other one isn't.
 
The single-disk configuration had no means to recover itself while the other driver was giving nothing but errors, so it wasn't left in an irreparable state. In some cases it's better to have no mirror or intelligence at all.

Because of the ZFS scrub the other pools are broken; this one just works after fixing the driver!

now:

Code:
  pool: test
state: ONLINE
  scan: scrub in progress since Thu Jan 31 07:11:07 2019
        2.14T scanned at 151M/s, 1.37T issued at 96.8M/s, 2.36T total
        0 repaired, 58.16% done, 0 days 02:58:32 to go
config:

        NAME         STATE     READ WRITE CKSUM
        test         ONLINE       0     0     0
          gpt/test0  ONLINE       0     0     0

How would one prevent this from happening, a bad driver corrupting ZFS? I mean, there was no indication this wouldn't work: 11.2 worked fine and the upgrade to 12 screwed it up. You would assume the same driver configuration would keep working?
 
Has something changed in FreeBSD 12?

Code:
# zpool status depot
  pool: depot
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Wed Jan 30 08:05:05 2019
        should correct the problem.  Approximately 6 seconds of data
        must be discarded, irreversibly.  Recovery can be attempted
        by executing 'zpool clear -F depot'.  A scrub of the pool
        is strongly recommended after recovery.
   see: http://illumos.org/msg/ZFS-8000-72
  scan: scrub in progress since Tue Jan 29 03:04:42 2019
        4.80T scanned at 0/s, 4.80T issued at 0/s, 15.6T total
        284G repaired, 30.87% done, no estimated completion time
config:

        NAME            STATE     READ WRITE CKSUM
        depot           FAULTED      0     0     2
          mirror-0      ONLINE       0     0     8
            gpt/depot0  ONLINE       0     0     8  block size: 512B configured, 4096B native
            gpt/depot1  ONLINE       0     0     8  block size: 512B configured, 4096B native
          mirror-1      ONLINE       0     0     4
            gpt/depot2  ONLINE       0     0     4  block size: 512B configured, 4096B native
            gpt/depot3  ONLINE       0     0     4  block size: 512B configured, 4096B native

# zpool clear -F depot
cannot clear errors for depot: I/O error
# zpool clear -Fn depot
internal error: out of memory
 
For now it's scrubbing. No, I wasn't able to clear it by just running zpool clear...

I did zpool export depot && zpool import -F depot, and after that the pool became available to me again. I'll post the result once the scrub has finished.
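For anyone finding this later, the recovery sequence amounts to this, as I understand it (-F rewinds the pool to an earlier, consistent transaction group, discarding the few seconds of writes the status output warned about):

Code:
zpool export depot
# recovery-mode import: rewind to the last consistent state
zpool import -F depot
# strongly recommended after recovery, per the status output above
zpool scrub depot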
 