ZFS: What am I doing wrong?

I am trying to transfer my root file system to a smaller drive, following this guide: https://blog.grem.de/sysadmin/Shrinking-ZFS-Pool-2014-05-29-21-00.html

Below are my commands and errors (under bold lines)



Code:
root@vmbsd:/usr/home/pete # gpart destroy -F ada5
ada5 destroyed
root@vmbsd:/usr/home/pete # gpart create -s GPT ada5
ada5 created
root@vmbsd:/usr/home/pete # gpart add -t freebsd-boot -s 512 ada5  
ada5p1 added
root@vmbsd:/usr/home/pete # gpart add -t freebsd-zfs -s 40G -l boot ada5
ada5p2 added

root@vmbsd:/usr/home/pete # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada5
partcode written to ada5p1
bootcode written to ada5
zpool create -o cachefile=/tmp/zpool.cache zroot40 gpt/boot

root@vmbsd:/usr/home/pete # gpart show ada5                                            
=>       40  156301408  ada5  GPT  (75G)
         40        512     1  freebsd-boot  (256K)
        552   83886080     2  freebsd-zfs  (40G)
   83886632   72414816        - free -  (35G)

root@vmbsd:/usr/home/pete # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage   476G   358G   118G        -         -     9%    75%  1.00x  ONLINE  -
zroot    56.5G  8.68G  47.8G        -         -    16%    15%  1.00x  ONLINE  -
zroot40  39.5G   678K  39.5G        -         -     0%     0%  1.00x  ONLINE  -

root@vmbsd:/usr/home/pete # zfs snapshot -r zroot@backup
root@vmbsd:/usr/home/pete # zfs send -vR zroot@backup | zfs receive -vFud zroot40

It showed everything transferring, with no errors.

Code:
root@vmbsd:/usr/home/pete # zfs destroy -r zroot@backup
root@vmbsd:/usr/home/pete # zpool set bootfs=zroot40/ROOT/default zroot40
root@vmbsd:/usr/home/pete # cp /tmp/zpool.cache /tmp/zroot40.cache
root@vmbsd:/usr/home/pete # zpool export zroot40
root@vmbsd:/usr/home/pete # zpool import -c /tmp/zroot40.cache -R /mnt/usb zroot40

At this point var, usr, tmp and zroot40 appear under /mnt/usb

Code:
root@vmbsd:/usr/home/pete # zfs set mountpoint=/ zroot40/ROOT

Now var, usr, tmp and zroot40 have disappeared from /mnt/usb

Code:
root@vmbsd:/usr/home/pete # cp /tmp/zroot40.cache /mnt/usb/boot/zfs/zpool.cache
cp: /mnt/usb/boot/zfs/zpool.cache: No such file or directory

zpool list shows: zroot40 is mounted at /mnt/usb
zfs list shows it there also

As you can probably tell, I am new to this; I have done searches and cannot figure out what I am doing wrong.
If you can help, please be specific, as if you are explaining to a child.
Thanks
 
You probably want to pass -u to zfs recv. If you don't, the mount points will remain the same, and it will then try to mount them in locations where your original pool's filesystems are already mounted. E.g., it will try to mount 'zroot40/usr' on /usr, but 'zroot/usr' is already mounted there.
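In other words, something like this, using the pool names from your post (-u keeps the received datasets unmounted, -d strips the source pool name from the received dataset names):

Code:
zfs send -vR zroot@backup | zfs receive -vFud zroot40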
 
You probably want to pass -u to zfs recv. If you don't, the mount points will remain the same, and it will then try to mount them in locations where your original pool's filesystems are already mounted. E.g., it will try to mount 'zroot40/usr' on /usr, but 'zroot/usr' is already mounted there.

Thanks for your reply, your suggestion seemed to fix my first problem.

My second problem still remains:
I have edited my original post above to show my new results (or lack thereof). Any suggestions?
I found a post that seems to suggest I need to change this:
zfs set mountpoint=/ zroot40/ROOT to zfs set mountpoint=/ zroot40/ROOT/default
I haven't tried it yet (kinda confused as it is).
On another note, am I not posting correctly? I posted about an issue a week or two ago and got no response. In this thread only you have responded; am I asking dumb questions or going about posting the wrong way?
 
I found a post that seems to suggest I need to change this:
zfs set mountpoint=/ zroot40/ROOT to zfs set mountpoint=/ zroot40/ROOT/default
Assuming you're using the conventional naming, yes, zroot40/ROOT/default would be the 'default' boot environment for your zroot40 pool. The reason this is done is so that you can have different boot environments, e.g. you can make a new 'root' (as in, '/') which you could name, for example, <poolname>/ROOT/experimental, and then boot using that as your root instead. I've never used this feature, but some people really like it.
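For what it's worth, on FreeBSD 12 and later the bectl tool manages boot environments; a minimal sketch (the 'experimental' name is just an example):

Code:
bectl list
bectl create experimental
bectl activate experimental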

Once you set the option on the correct filesystem, things should work properly.
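A minimal sketch, assuming the conventional layout shown in your zpool list output:

Code:
# ROOT is just a container; only ROOT/default gets /
zfs set mountpoint=none zroot40/ROOT
zfs set mountpoint=/ zroot40/ROOT/default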

On another note, am I not posting correctly? I posted about an issue a week or two ago and got no response. In this thread only you have responded; am I asking dumb questions or going about posting the wrong way?
To my knowledge everything is working. Some forums are just more active than others, and at this time of the year a lot of people are on vacation and don't check in here.
 
You have got most of the steps right, and you most likely already have a functional system.
W.r.t. this question:
zfs set mountpoint=/ zroot40/ROOT to zfs set mountpoint=/ zroot40/ROOT/default
I haven't tried it yet (kinda confused as it is)
Both zroot (or zroot40) and zroot/ROOT (or zroot40/ROOT) must show none as their mountpoint. Only zroot/ROOT/default (or zroot40/ROOT/default) must show "/" as its mountpoint. So the instruction is right.
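You can check what you currently have with something like:

Code:
zfs get -r mountpoint zroot40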

ZFS pools and datasets are a complex topic. You may be better off completing the migration in single-user mode or from a rescue disk. Given that you already have the pool zroot (on one or both disks) and now want to migrate it from the source disk to the destination disk (possibly renaming the migrated pool from zroot40 back to zroot), single-user mode or a rescue disk will likely be useful here.
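If you do rename the pool at the end, the usual way (from single-user mode or a rescue disk, once the old zroot is no longer imported under that name) is an export followed by an import under the new name, roughly:

Code:
zpool export zroot40
zpool import -f zroot40 zroot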

I would want to be very meticulous in carrying out this task. The new/destination disk will inevitably consume more bytes, because you are dropping a new pool/dataset on top of the existing one. The entry vfs.root.mountfrom="zfs:zroot40/ROOT/default" in /boot/loader.conf may be required in addition to "zpool set bootfs=zroot40/ROOT/default zroot40" if you find yourself still booting only into the existing zroot on the new disk. The /var and /usr directories also tend to get cluttered: zroot/var and zroot/usr are not mounted, but technically /var and /usr are mounted and accessible via zroot(40)/ROOT/default. Now that you have var and usr mounted under /mnt, you need to make sure they are the ones used with zroot40 after the migration, so that neither pool ends up using the other's system directories. More importantly, you will want to ensure your system uses the standard conventions - zroot, zroot/var, /var, etc. - so it does not give you problems in the future.
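Concretely, and assuming zroot40/ROOT/default ends up mounted under /mnt/usb as in your post, that would look something like:

Code:
zpool set bootfs=zroot40/ROOT/default zroot40
echo 'vfs.root.mountfrom="zfs:zroot40/ROOT/default"' >> /mnt/usb/boot/loader.conf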

Most of this procedure is the standard one for installing FreeBSD with root on ZFS (on GPT). Given the flexibility that comes with Unix administration, we can modify the procedure to achieve what we want, but sometimes the pieces do not fit together well. If you are transferring the root filesystem to a smaller drive, why not do a fresh installation, zfs send/receive the existing pool (like you have already done to zroot40), copy your configuration files over, and reinstall all packages?

On old pool:
Code:
pkg info -aq > installedpkgs.txt

On fresh installation:
Code:
mkdir /tmp/oldzroot
mount -t zfs zroot40/ROOT/default /tmp/oldzroot
cp -Rf /tmp/oldzroot/etc/*.conf /etc/
cp -Rf /tmp/oldzroot/usr/local/etc/*.conf /usr/local/etc/
pkg install `cat installedpkgs.txt`

Some schools of thought would suggest zfs mount over mount -t zfs; I prefer zfs mount, but mount -t zfs gets the job done for me. With this procedure you are in full control: you can choose to use a partition for cache instead of a cache file, use a ZIL, and more. And you can migrate any other data from the old pool as you wish.
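For comparison, a zfs mount version of that step might look like this - a sketch that assumes the old pool is not currently imported, so the altroot puts the mountpoint=/ dataset under /tmp/oldzroot:

Code:
# import with an altroot instead of using mount -t zfs
zpool import -R /tmp/oldzroot zroot40
# mount the root dataset explicitly (needed if canmount=noauto)
zfs mount zroot40/ROOT/default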
 