move ZFS installation to another hdd (with minor changes)

Hi guys,

I'm in the process of moving a ZFS installation onto another HDD because the current swap partition (4 GB) is too small.

ATM, I have a mirrored ZFS-on-root setup (pool name: zroot):
Code:
=>        34  3905945533  mfid0  GPT  (1.8T)
          34         128      1  freebsd-boot  (64k)
         162     8388608      2  freebsd-swap  (4.0G)
     8388770  3897556797      3  freebsd-zfs  (1.8T)

=>        34  3905945533  mfid1  GPT  (1.8T)
          34         128      1  freebsd-boot  (64k)
         162     8388608      2  freebsd-swap  (4.0G)
     8388770  3897556797      3  freebsd-zfs  (1.8T)
and the goal is to increase the swap to 32 GB, with the space coming from recreating the freebsd-zfs partition 28 GB smaller and extending partition 2 (the swap).
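Back-of-the-envelope, the layout I'm aiming for on each disk would look roughly like this (my own arithmetic assuming 512-byte sectors, not real gpart output):
Code:
=>        34  3905945533  mfid0  GPT  (1.8T)
          34         128      1  freebsd-boot  (64k)
         162    67108864      2  freebsd-swap  (32G)
    67109026  3838836541      3  freebsd-zfs  (1.8T)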

I tried some stuff but ran into a wall after the following steps (all done in VBox):
Code:
# backup
zfs snapshot -r zroot@1

# Resize the swap partition
0) ls -la /dev/gpt                                      # get the GPT labels
1) ls -la /dev/label/                                   # get the Glabels
2) swapoff -a
3) gmirror remove swap swapmfid1
4) zpool offline zroot gpt/mfid1
5) zpool detach zroot gpt/mfid1
6) gpart delete -i 3 mfid1                             # leave partition 1 alone
7) gpart resize -s 32G -i 2 mfid1
8) gpart add -t freebsd-zfs -l mfid1 mfid1

# Create zfs datasets on the "new" hdd
- zpool create zroot1 /dev/gpt/mfid1
- zfs send -R zroot@1 | zfs receive -F -d zroot1
- zpool export zroot1 && zpool import zroot1
Basically, I have the same installation on both HDDs. So I go ahead and reboot, the system starts up fine, but then I get "cannot mount from ZFSroot zroot". This is the point I don't understand, since it is not supposed to happen. In theory, the HDDs should be identical and therefore boot. In reality, something didn't go as planned.


Ideas?
 
Ah, I forgot to copy the zpool.cache. OK, I did that now, but I get the same result.
 
You probably forgot something else as well. Some hints:

Code:
zpool set bootfs=zroot zroot (on the new pool)
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
...

Hope this helps.
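In case the "..." above isn't obvious: there is one of those mountpoint lines per dataset. A sketch, with hypothetical dataset names (adjust to whatever your layout actually has):

Code:
zpool set bootfs=zroot zroot
zfs set mountpoint=legacy zroot
# hypothetical datasets below; use the names from zfs list on your pool
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/home zroot/home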
 
I did not set those, because I copied over a snapshot from the other HDD and expected it to simply... work.
OK, it didn't, so I did set those options... but I'm still getting "cannot mount from zroot1".

My question is: did anyone actually do this?
 
da1 said:
I did not set those, because I copied over a snapshot from the other HDD and expected it to simply... work.
Remember that you don't just copy snapshots here. You create a new pool and then receive your old one there.
da1 said:
OK, it didn't, so I did set those options... but I'm still getting "cannot mount from zroot1".
You probably misspelled something... have you changed /boot/loader.conf?
Code:
vfs.root.mountfrom="zfs:zroot1"
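For a ZFS root the relevant lines in /boot/loader.conf usually boil down to these two (a minimal sketch; the rest of your loader.conf stays as it is):
Code:
# load the ZFS kernel module at boot
zfs_load="YES"
# tell the kernel which pool/dataset holds the root file system
vfs.root.mountfrom="zfs:zroot1"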
da1 said:
My question is: did anyone actually do this?
Many times!
 
In addition to what gkontos said, you should also use altroot. You can't really be sure you modified the right /boot/loader.conf when two pools are mounted on top of each other (same mount points). As an example, here are my notes from when I copied a bootable USB disk to a hard disk. Ignore the parts about zfs snapshot, send and recv, since that is not what you are doing.

Code:
zpool import -o altroot=/z -o cachefile=/tmp/zpool.cache zrootusbcopy
zfs umount -f zrootusbcopy
zfs set mountpoint=/ zrootusbcopy
(notice here I use / rather than /z; because /z is automatically added, 
but is gone when I reboot or reimport without altroot)

zfs snapshot -r zrootusb@snapforcopy

(before send, make sure zrootusbcopy is really mounted with -o altroot=/z or 
it will inherit the mountpoint=/ from the send, and screw up the running system)
zfs send -R zrootusb@snapforcopy | zfs recv -F -d zrootusbcopy

cp /tmp/zpool.cache /z/boot/zfs/zpool.cache
(As you can see I have not exported the pool, which is mandatory. 
Also note I left the mount point as "/" rather than "legacy", which is optional. 
At this point, I rebooted and removed the original to test.)

Since your error message says "cannot mount from ZFSroot zroot" rather than "zroot1", it is apparently reading "zroot" from /boot/loader.conf (or maybe from another place, such as boot.config in the root of the zpool, which I don't know anything about) rather than from the one you (should have) modified: the /boot/loader.conf on zroot1, not the one on zroot.
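A quick way to convince yourself you are editing the right file when the copy is imported with an altroot (using the altroot=/z example from my notes above):

Code:
grep mountfrom /z/boot/loader.conf      # loader.conf on the copied pool
grep mountfrom /boot/loader.conf        # loader.conf of the running system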

One more thing to do to prevent a horrible problem (not relevant based on your information, though) is to make sure that your bootable system is on the first ZFS slice on the disk. I am told this is a well-known problem, but it is not well documented. For example, if slice 1 were boot, slice 2 were the cache device for a zpool called "somethingnonbootable" and slice 3 were the bootable zpool called "zroot1", the error message on boot would actually say something about "somethingnonbootable" instead of "zroot1", without ever trying to read the correct name from loader.conf.
 
The missing piece of the puzzle was [cmd=]zpool import -o altroot=/zroot1 -o cachefile=/tmp/zpool.cache zroot1[/cmd]. In the end, the procedure came out like this:

Code:
# backup
zfs snapshot -r zroot@1

# Rebuild the 2nd hdd
0) ls -la /dev/gpt                                      # get the GPT labels
1) ls -la /dev/label/                                   # get the Glabels
2) swapoff -a
3) gmirror remove swap swapmfid1
4) zpool offline zroot gpt/mfid1
5) zpool detach zroot gpt/mfid1
6) gpart delete -i 3 mfid1                             # leave partition 1 alone
7) gpart resize -s 32G -i 2 mfid1
8) gpart add -t freebsd-zfs -l mfid1 mfid1

# Create zfs datasets on the "new" (2nd) hdd
- zpool create zroot1 /dev/gpt/mfid1
- zfs send -R zroot@1 | zfs receive -F -d zroot1
- zpool export zroot1
- cd /tmp
- zpool import -o altroot=/zroot1 -o cachefile=/tmp/zpool.cache zroot1
- zfs set mountpoint=/zroot1 zroot1 (will create "zroot1" under "/zroot1")
- edit /zroot1/zroot1/boot/loader.conf and modify "vfs.root.mountfrom=zfs:zroot" to "zfs:zroot1"
- cp /tmp/zpool.cache /zroot1/zroot1/boot/zfs/
- zfs set mountpoint=legacy zroot1
- zpool set bootfs=zroot1 zroot1
- "halt -p" and remove "old" zroot disk (boot from secondary)

# Rebuild the first hdd
- gpart delete -i 3 mfid0
- gpart resize -s 32G -i 2 mfid0
- gpart add -t freebsd-zfs -l disk00 mfid0
- zpool attach -f zroot1 gpt/mfid1 gpt/disk00

# Rebuild the swap mirror

- gmirror unload
- gmirror label -vb round-robin swap /dev/mfid1p2
- gmirror insert swap /dev/mfid0p2

So far, I haven't touched the production machine ....
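Before trusting it on real hardware, a few sanity checks worth running (names as used above):

Code:
zpool status zroot1       # both gpt labels present, resilver finished
zpool get bootfs zroot1   # must report zroot1
gmirror status            # swap mirror shows both consumers
swapinfo                  # total swap should now be ~32G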
 
Oh well, the nice server doesn't know how to boot from the 2nd disk (it only knows how to boot from the RAID controller).

I will try to send a ZFS snapshot to another machine, reinstall this one, and receive the snapshot back (and then rollback).
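The round trip I have in mind would look roughly like this ("otherhost" and the "backup" pool are placeholders, and the receiving side should be imported with an altroot, as peetaur described, so the received mountpoints don't shadow the live system):

Code:
# push everything to a scratch pool on another box
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | ssh otherhost zfs receive -F -d backup

# after reinstalling this machine, pull it back the same way
ssh otherhost zfs send -R backup@migrate | zfs receive -F -d zroot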
 
Very handy guide. I had similar problems, due to missing the
Code:
zfs set mountpoint=legacy zroot
bit.

Just one thing, I think the set bootfs line should be:

Code:
[B]zpool[/B] set bootfs=zroot1 zroot1

(not "zfs")

One other general point I'm not clear on: why is it necessary to copy the temporary zpool.cache file? I was under the impression that the cache file is purely a speed-up during import, i.e. if it's missing it will just take longer to probe the devices, but things will still work. On that basis I had just been deleting the migrated (old) cache file, on the assumption that it would be recreated for the new device on the next import (e.g. at boot).

Is that not true?

sim
 
It's normally just a speed-up, but when booting from a ZFS pool it is the only source of information the kernel has about which devices make up the pool it's about to mount the root file system from.
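Which is exactly why the -o cachefile=/tmp/zpool.cache import and the copy into /boot/zfs matter; condensed from the steps above:

Code:
# import with an explicit, temporary cache file so it describes only this pool
zpool import -o altroot=/zroot1 -o cachefile=/tmp/zpool.cache zroot1
# then put that cache where the kernel expects it on the pool you boot from
# (with the mountpoint set as in the procedure above, that path is /zroot1/zroot1/boot/zfs)
cp /tmp/zpool.cache /zroot1/zroot1/boot/zfs/zpool.cache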
 
sim said:
Why is it necessary to copy the temporary zpool.cache file? I was under the impression that the cache file is purely a speed-up during import, i.e. if it's missing it will just take longer to probe the devices, but things will still work.

From what I've read, it "should" be recreated and the system would just boot more slowly, and that is what happens on boot with Solaris, but FreeBSD at the time didn't support that in the boot loader and relied on the .cache file. I have no idea if this is still true; I haven't tested it since 8.2-RELEASE.
 
If the boot loader (/boot/gptzfsboot) were capable of probing the disks and figuring out the pool configuration from just the on-disk metadata, the /boot/zfs/zpool.cache file wouldn't be needed when booting from a ZFS pool.
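As an aside, the freebsd-boot partitions were left untouched in the procedure above, so the existing boot code survived the repartitioning; if they ever did need to be rewritten, the usual command (disk names as in this thread) is along these lines:

Code:
# install the protective MBR and the ZFS-aware gptzfsboot loader into partition 1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid1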
 