This post was about a grub-btx-zfs problem. I have edited it because there are important changes to the initial description.
Origin: I wanted to test GRUB with FreeBSD's root on ZFS, with GRUB on its own partition. I copied my existing FreeBSD zpool (v28) as-is to the other HDD, but gave it a different pool name. I adjusted all zpool/zfs properties as required and as described on the http://wiki.freebsd.org pages. I edited loader.conf to specify the new zpool name and copied a zpool.cache containing the new zpool to /boot/zfs. Finally, I installed GRUB (from a Linux distro) to its own partition (ext2fs, 256-byte inodes) and edited its config file.
The setup failed to boot because the root ZFS would not mount. At first I thought I had made a mistake in my GRUB menu entries, so I replaced GRUB with BTX to test whether the problem lay with GRUB or with my pool settings. Sure enough, BTX was also unable to boot. More precisely, the kernel and modules loaded, but root would not mount. The process halted with the error below, and the mountroot "?" listing showed only GPT slices - no pools:
Code:
mounting from zfs:bsd/root failed with error 2
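For reference, the GRUB menu entry I was testing looked roughly like the sketch below. This is from memory and not verified; the pool name (bsd), the dataset (bsd/root), the module list, and the exact GRUB-side ZFS path form all come from my setup and may differ on yours:

Code:
menuentry "FreeBSD root-on-ZFS" {
    insmod zfs
    search --no-floppy --label --set=root bsd
    kfreebsd /root/@/boot/kernel/kernel
    kfreebsd_module_elf /root/@/boot/kernel/opensolaris.ko
    kfreebsd_module_elf /root/@/boot/kernel/zfs.ko
    kfreebsd_module /root/@/boot/zfs/zpool.cache type=/boot/zfs/zpool.cache
    set kFreeBSD.vfs.root.mountfrom=zfs:bsd/root
}

As it turned out, the menu entry was not the real problem.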
After much drudgery and tinkering I have solved the issue; here is the solution. Normally one would make the pool adjustments and then cleanly export the zpool - that is not possible with this approach. My original system mounts root via fstab, with mountpoint=legacy and canmount=noauto. To prevent root zpool conflicts I wanted to keep the noauto property on the root dataset of the new system as well. Apparently ZFS does not reset the mountpoint to / after an import with altroot, and herein lies the central problem. The zdb command also does not work against a pool imported with altroot and gives a "no such file or pool found" error; this zdb error is a documented ZFS bug.
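For context, the legacy setup on my original system amounts to the following (a sketch; the pool name is illustrative):

Code:
# zfs set mountpoint=legacy pool/root
# zfs set canmount=noauto pool/root

plus this line in /etc/fstab:

Code:
pool/root   /   zfs   rw   0   0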
1. You need a separate root to boot from (obviously): a CD, a USB stick (e.g. mfsbsd), or another PC with the new HDD attached.
2. Import the pool, then unmount the root zfs (newpool or newpool/root):
# zpool import -o altroot=/mnt newpool
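The unmount itself would then be (assuming the root filesystem is newpool/root; use plain newpool if root lives on the pool's top-level dataset):

Code:
# zfs umount newpool/root
# zfs list -r -o name,mountpoint,canmount,mounted newpool

The second command is just a sanity check that nothing from newpool is still mounted.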
3. Now change these 2 properties:
# zfs set mountpoint=/ newpool (or newpool/root)
# zfs set canmount=on newpool (or newpool/root)
If root is on newpool/root, make sure canmount=off and mountpoint=none for newpool itself.
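Put together, for the newpool/root layout the whole adjustment would look roughly like this - a sketch of the steps above, not a verified transcript:

Code:
# zpool import -o altroot=/mnt newpool
# zfs umount newpool/root
# zfs set mountpoint=none newpool
# zfs set canmount=off newpool
# zfs set mountpoint=/ newpool/root
# zfs set canmount=on newpool/root

Expect ZFS commands to stop responding after the last line, as described in step 4.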
4. As soon as you set canmount, all ZFS-related commands will stop responding and give a library error. This is no surprise, since the system now has two root mount points (as can be seen with # df). My system was also unable to shut down and I had to do a hard reset (pull the plug).
5. The new ZFS root will mount when you reboot into the second HDD. Once the root zfs is mounted, you can re-adjust properties as required.
End Result: My solution is not very pretty and I have probably missed some finer points (at least a chroot approach), but this is what worked, and it was quick. I hope this description will help others who come across a similar situation.