Solved Renaming ZFS pool via zpool import

Hi,

I am trying to rename my backup server ZFS pool from zroot to zback.
The idea was to execute
Code:
zpool export zroot
zpool import zroot zback

When I ran zpool export zroot I ended up with the following error message
Code:
cannot unmount '/': Invalid argument
I tried it via ssh and in single user mode and had the same outcome.

Could anyone please help
 
You can't export that filesystem because it's 'active', you're booting from it. Even in single user mode.

After you renamed it does the system still need to boot from it? Or is it intended to be used as an 'extra' pool and you're going to boot from a different pool?
 
Basically you need to use a rescue CD so that the system isn't active.

But since you're trying to rename the main pool you also need to make sure that you edit /boot/loader.conf to reflect the new pool name (see the vfs.root.mountfrom setting) as well as change the bootfs property on the main pool itself. So something like (edit): # zpool set bootfs=<new pool name> poolname (my bad!).

After that you should be home free.

(edit) Just to be on the safe side though I'd re-install the bootloader using gpart nonetheless.
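For reference, the overall procedure sketched as commands (a sketch only, using the pool names from this thread and the bootfs syntax corrected further down; not verified on your hardware):

```shell
# From a live/rescue environment, where the pool is not in use:

# Import the pool under its new name, with everything mounted under /mnt
zpool import -fR /mnt zroot zback

# Point the loader at the renamed pool by editing /mnt/boot/loader.conf
# so that it contains:
#   vfs.root.mountfrom="zfs:zback"

# Tell the pool which dataset to boot from (here the pool's root dataset)
zpool set bootfs=zback zback

# Export cleanly before rebooting into the renamed pool
zpool export zback
```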
 
After you renamed it does the system still need to boot from it?
Yes I intend to boot from it.

So to be clear, I need to boot from rescue disk, then run
Code:
zpool export zroot
zpool import zroot zback
followed by # zpool set bootfs zback and edit /boot/loader.conf
Code:
vfs.root.mountfrom="zfs:zback"

Is that correct?
 
No. See also zpool(8). It's property = value. # zpool set bootfs=zback zback.

The rest fully matches. Also: I would definitely re-install the bootloader with gpart after you've made all the required changes, just to be safe. It probably isn't needed, but it doesn't hurt.
 
Sorry to be naive here, but is the FreeBSD CD = Rescue CD?
I loaded the installation media and then single user mode but # zpool list returned nothing
 
btw: sorry for my a little too hasty share of the zpool command. Sometimes I share partial commands but I'm obviously not clear enough about it. Happens when I post without taking the time for it.

is the FreeBSD CD = Rescue CD?
Yeah, force of habit on my end. Basically any boot environment which provides some kind of live system will do. I usually rely on the disc1 ISO image. You can use that to install FreeBSD but also as a live cd / rescue environment.

I loaded the installation media and then single user mode but # zpool list returned nothing
Not sure I follow. You booted with a live cd on the host which contains the pool you want to rename?

Then this is expected; zpool list only lists currently imported / active pools. Try zpool import instead, that should list all pools available for import. After that you can perform the actual import / rename.
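In other words (a small illustration, run from the live environment):

```shell
# 'zpool list' shows only pools that are currently imported, so on a
# freshly booted live CD it typically prints nothing.
zpool list

# 'zpool import' with no arguments scans all devices and lists every
# pool that is available for import, without importing anything yet.
zpool import
```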

(edit)

Basically... what I'd do is...

# zpool import -fR /mnt zroot zback, this should import (and rename) your new ZFS pool as zback and then mount it under /mnt. From there on you should be able to access /mnt/boot in order to apply the required changes to loader.conf.

Then the (this time verified) command: # zpool set bootfs=zback zback.

Finally I'd re-install the boot code on the disk, assuming you plan to boot from this pool. Be sure to use the files in /mnt/boot and not /boot, because there is no guarantee that your rescue environment will be the same as your installed environment.

Something along the lines of (warning: this is just an example!): # gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0. You obviously would need to adjust the value of -i and the device for your own environment.
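To pick the right -i value, the partition table can be inspected first; the freebsd-boot partition's index is the one gpart bootcode needs (ada0 is just the example device from above, and this assumes a GPT layout):

```shell
# Show the partition layout; look for the partition of type freebsd-boot.
gpart show ada0

# If the freebsd-boot partition is index 1, the example command applies as-is;
# otherwise substitute that partition's index for -i (and repeat per boot disk).
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0
```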
 
When I reboot the server, I am seeing strange things..
# zpool status
Code:
  pool: zback
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 70.0G in 1h3m with 0 errors on Fri Jul  1 11:34:23 2016
config:

        NAME                          STATE     READ WRITE CKSUM
        zback                         ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            ufsid/52a0f5122e690fcap2  ONLINE       0     0     0
            ufsid/52a0f53ee01d7083p2  ONLINE       0     0     0
            ufsid/52a0f56322d7540ap2  ONLINE       0     0     0
            ada3p2                    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 70.0G in 1h3m with 0 errors on Fri Jul  1 11:34:23 2016
config:

        NAME                          STATE     READ WRITE CKSUM
        zroot                         ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            ufsid/52a0f5122e690fcap2  ONLINE       0     0     0
            ufsid/52a0f53ee01d7083p2  ONLINE       0     0     0
            ufsid/52a0f56322d7540ap2  ONLINE       0     0     0
            ada3p2                    ONLINE       0     0     0

errors: No known data errors
/boot/loader.conf
Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:zback"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gpt.enable="1"
kern.geom.label.gptid.enable="0"
I am assuming that I am using the zback pool as it is the one specified in /boot/loader.conf, but how do I delete the old zroot pool?

I didn't run # gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0 yet

Should I # zpool destroy zroot ?
 
# zfs list
Code:
NAME         USED  AVAIL  REFER  MOUNTPOINT
zback       57.3G  1.63T  54.5G  /zback
zback/swap  2.38G  1.63T  2.38G  -
zroot       57.1G  1.63T  54.2G  /zroot
zroot/swap  2.38G  1.63T  2.38G  -
 
What exactly did you do?

Also: you say you 'rebooted' but what led up to this? I mean, did the server actually reboot and you logged on then issued the commands we see here and that's that or.... something else?

Be very specific, explain the incident as if we were morons. And yes, I use the m word and it's the weekend so bite me but honestly: this is what it takes to get good answers, explain all details and be very specific.

Note: not poking fun or anything else at you fred974 but just being a little direct here. You've been here quite a long time and if I can help then you honestly got it.
 
ShelLuser I can do direct, especially when someone is trying to help :)
Before we go too far here, I want to point out that this is a backup server and the current downtime is more annoying to me than anything else..
I can always reinstall a fresh copy of FreeBSD but I'd like to take the opportunity to learn from my mistake.. So this downtime is NOT mission critical as backups are being sent somewhere else for now :)

I booted the backup server using the installation CD media and then selected the 'live cd' option. From there I ran the command you gave me, zpool import -fR /mnt zroot zback. This successfully renamed the pool but couldn't mount /mnt (I guess /mnt is not writable on the live cd). From there, I sent the # reboot command to the server. Once the server came back, I then renamed the pool back to zroot with zpool import -fR /mnt zback zroot and did another # reboot.

Then I did the following:
# mkdir /tmp/zback
# zpool import -fR /tmp/zback zroot zback
vi /tmp/zback/zback/boot/loader.conf and set
Code:
vfs.root.mountfrom="zfs:zback"
and # reboot the system.
When the server rebooted, I could see two pools
# zpool status
Code:
pool: zback
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 70.0G in 1h3m with 0 errors on Fri Jul  1 11:34:23 2016
config:

        NAME                          STATE     READ WRITE CKSUM
        zback                         ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            ufsid/52a0f5122e690fcap2  ONLINE       0     0     0
            ufsid/52a0f53ee01d7083p2  ONLINE       0     0     0
            ufsid/52a0f56322d7540ap2  ONLINE       0     0     0
            ada3p2                    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 70.0G in 1h3m with 0 errors on Fri Jul  1 11:34:23 2016
config:

        NAME                          STATE     READ WRITE CKSUM
        zroot                         ONLINE       0     0     0
          raidz2-0                    ONLINE       0     0     0
            ufsid/52a0f5122e690fcap2  ONLINE       0     0     0
            ufsid/52a0f53ee01d7083p2  ONLINE       0     0     0
            ufsid/52a0f56322d7540ap2  ONLINE       0     0     0
            ada3p2                    ONLINE       0     0     0

errors: No known data errors
At that point, I could ssh to the server and I could see that some backups were arriving on it.

After a while, I started to get backup failure emails and when I checked the server, I could see that it is now stuck in a reboot loop.
I can see the FreeBSD mascot and the option to select single user mode etc..

I can send photos of it later as I'm not at my desk right now...
 
Now I see what happened. Hmm, I am aware that /mnt is read-only and I too sometimes use /tmp, but the last few times I tried this the whole mounting scheme worked. Odd, I'll have to test this myself later on.

Anyway: the problem seems to have been caused by the second re-run. Instead of trying to rename zroot again you should merely have imported zback. So: # zpool import -fR /tmp/zback zback and that's it. I'm even a little surprised that it responded to zroot at all and didn't error out saying that pool wasn't available anymore. This could indicate that something else was already amiss, but that's speculation.

So what I'd do right now... Boot from the rescue CD again and after you booted start by running # zpool import. It will now probably list both pools. Import them both, but specifically:

Code:
# mkdir /tmp/zback
# zpool import -fR /tmp/zback zback
# zpool import -fNR /mnt zroot
This will import zback in the normal way; your filesystems get mounted and you can fully access it. But it will merely import zroot without mounting anything. So now try: # zpool destroy -f zroot.

There is a risk here of course, but considering that it allowed you to create a "shadow pool" I'm convinced that it will also let you remove that pool again. After that, make sure that you can still access zback; when that's the case you should be fully clear to use it.
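Before rebooting, a quick verification pass might look like this (assuming the import paths used earlier in the thread):

```shell
# Pool should be ONLINE with no known data errors
zpool status zback

# Datasets and mountpoints should look sane
zfs list -r zback

# The loader must point at the renamed pool
# (pool was imported with -R /tmp/zback, root dataset mounts at /zback)
grep vfs.root.mountfrom /tmp/zback/zback/boot/loader.conf

# bootfs should report the boot dataset on zback
zpool get bootfs zback
```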

Hope this is going to work out!
 
ShelLuser the server's motherboard failed over the weekend, so I'm no longer able to play with it.
I'd like to say thank you for trying to help me :)
I will keep a record of your instructions for a rainy day :)
 