Default FreeBSD installer datasets on remaining free disk space?

Hello,

I'm trying to create a FreeBSD/Windows dual-boot setup on one of my machines (I know, I know, but I need Windows for certain tasks at my job). I have done this before by installing Windows first, leaving some free space, and then creating zfs-root and zfs-swap slices/datasets by hand in the remaining space when installing FreeBSD. Although this works, it doesn't create the default pool/dataset/slice setup (along with quotas and whatnot) as it would on a standard FreeBSD zfs-on-root setup on an entire disk. How do I go about replicating the same disk layout/datasets as a default FreeBSD install with all the default configs/tuning options/etc? Any specifics (command sequences for example) are highly appreciated because I don't have a lot of experience with zfs aside from the basics.
 
(I know, I know, but I need Windows for certain tasks at my job).
So do I, what's the issue?

I have done this before by installing Windows first, leaving some free space, and then creating zfs-root and zfs-swap slices/datasets by hand in the remaining space when installing FreeBSD.
Yes, that's typically the route I take too. Install Windows first (custom install) and leave some space free on the disk for FreeBSD.
Although this works, it doesn't create the default pool/dataset/slice setup (along with quotas and whatnot) as it would on a standard FreeBSD zfs-on-root setup on an entire disk.
I honestly can't remember how but I have the 'standard' ZFS datasets. I think I did a custom partitioning, created the freebsd-zfs partition and then continued with the installer. But I might have to replay what I did because I really can't remember any more. I'm positive I didn't create all the datasets myself though.
 

I believe this script is what the installer uses to create everything. A few years ago I trawled through it and recreated the steps here for FreeBSD 12.1-RELEASE, but since it's been a few years you should probably go through it and make sure nothing has changed (I'm also not sure whether I deviated from it at all, so please don't take my copy as an exact replica).
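For reference, here's the heart of what I reconstructed, as best I can remember it; a sketch rather than an exact replica, so verify everything against the zfsboot script for your release (pool name zroot and partition nda0p4 are just examples):

Code:
# pool options the installer used by default (12.1-era)
zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot /dev/nda0p4

# boot environment hierarchy
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default

# default dataset tree
zfs create -o mountpoint=/home zroot/home     # newer releases; 12.x used zroot/usr/home plus a /home symlink
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
zfs create -o mountpoint=/usr -o canmount=off zroot/usr
zfs create -o setuid=off zroot/usr/ports
zfs create zroot/usr/src
zfs create -o mountpoint=/var -o canmount=off zroot/var
zfs create -o exec=off -o setuid=off zroot/var/audit
zfs create -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/log
zfs create -o atime=on zroot/var/mail
zfs create -o setuid=off zroot/var/tmp

# point the bootloader at the root dataset
zpool set bootfs=zroot/ROOT/default zroot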
 
I have a similar question - let me know if I should create a new thread, but might be a quick one.

I've got FreeBSD 14 on a UFS partition that is 23G of a 500G single disk.

Code:
$ sudo gpart show nda0
=>        40  1000215143  nda0  GPT  (477G)
          40      532480     1  efi  (260M)
      532520    48234496     2  freebsd-ufs  (23G)
    48767016   943185920        - free -  (450G)
   991952936     8262240     3  freebsd-swap  (3.9G)
  1000215176           7        - free -  (3.5K)

I'd like to create a zfs pool using the remaining 450G, and have been unsuccessful so far. Something like a 'gpart create 48767016 943185920' maybe?

If not, will this require me to physically access this machine and boot from a USB stick to create a 450G partition first? Surely there's a way to do this remotely.

Any advice is much appreciated!
 
Freebsd4me I think the easiest thing is to delete the swap partition, maybe recreate it right after the ufs partition, then create a zfs partition using "the rest of the device".
Doing this remotely: be careful.
If the system is booted and using the UFS filesystem, I think (going by memory, don't have references in front of me to give you exact commands):
ssh in, get to root
swapoff the swap partition
gpart delete the swap partition
gpart add a freebsd-swap partition right after freebsd-ufs
gpart add a freebsd-zfs partition using the rest of the device

The gpart add commands should "create a new partition right after the last one, taking into account any alignment or specific start directives".
Then you should be able to swapon the recreated swap partition and zpool create on the new freebsd-zfs partition.

In theory, I think the -b option on gpart add lets you specify the starting block (48767016 in your case); not sure if -s (for size) 943185920 would do the right thing.
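From memory, roughly this sequence, with the device and indices taken from your gpart show output (sizes are examples; double-check each step, and make sure /etc/fstab still matches afterwards):

Code:
# run as root; swap is currently index 3 on nda0
swapoff /dev/nda0p3                           # stop using the swap partition
gpart delete -i 3 nda0                        # remove it
gpart add -t freebsd-swap -s 4G -a 1M nda0    # recreated right after freebsd-ufs, takes index 3 again
gpart add -t freebsd-zfs -a 1M nda0           # rest of the device, takes index 4
swapon /dev/nda0p3                            # swap back on
zpool create yourpool nda0p4                  # new pool on the new partition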
 
Thank you! I think I can figure out the exact commands to do those steps.
I could actually get physical access - in that case, it sounds like using gparted, I would just need to move the swap to the end of the UFS, then create the freebsd-zfs partition on the remaining drive. Right?
 
it sounds like using gparted, I would just need to move the swap to the end of the UFS, then create the freebsd-zfs partition on the remaining drive. Right?
That would probably work, as long as the swap partition is not in use or accessed anywhere (hence the swapoff command I referenced if the UFS system is booted and live)
I'm guessing that you would load gparted on USB and boot/run it from there?
 
Aight - think I figured it out, many thanks guise!

So, I've got this far:

$ sudo gpart show
=>        40  1000215143  nda0  GPT  (477G)
          40      532480     1  efi  (260M)
      532520    48234496     2  freebsd-ufs  (23G)
    48767016        2008        - free -  (1.0M)
    48769024     8388608     3  freebsd-swap  (4.0G)
    57157632   943057544     4  freebsd-zfs  (450G)
  1000215176           7        - free -  (3.5K)

Commands I ran, FWIW:

# add zfs_enable="YES" to /etc/rc.conf
sudo service zfs start
sudo swapoff -a
sudo gpart delete -i 3 nda0
sudo gpart add -t freebsd-swap -s4G -l gpswap -a1M nda0
sudo gpart add -t freebsd-zfs -a 4k nda0
# confirm fstab is still accurate
sudo swapon -a
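Not shown above: the zpool create itself. Something like the following, given that the new partition came up as nda0p4 and I named the pool zroot:

Code:
sudo zpool create zroot nda0p4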

I rebooted with cold sweaty palms, and it came back up.
Y'all are aight! I don't care what everyone else says!
 
Well... I spoke too soon. It did work - I was unmounting/remounting, running the zpool status commands successfully, wrote some test data to it... then I rebooted.

All gone. Triple-checked everything. ZFS enabled in rc.conf and also /boot/loader.conf, et al.

I can recreate it following the same steps above, but I have to -f (force) the create because it tells me the zpool 'may already exist'. After that, again, it all looks good until a reboot.

From some documentation I found, it seems it's having issues 'importing' the pool, but I'm unable to find what config file(s) to go check for that. Anyone?

I'll go poke at it with a stick some more today. I wonder if it has anything to do with the fact that I have the freebsd OS installed on a UFS partition, and I'm adding this zfs pool after the fact...
 
I wonder if it has anything to do with the fact that I have the freebsd OS installed on a UFS partition, and I'm adding this zfs pool after the fact...
Should not be an issue as long as the zpool is being used for data, not "root" (OS).

So after rebooting, you are successfully in the OS, running on the UFS? What does the command swapinfo tell you?
Exactly what lines do you have in rc.conf and loader.conf? Just want to verify zfs_load="YES" in loader.conf and zfs_enable="YES" in rc.conf.
If you can successfully boot into the OS and the only issue is the new zpool not automatically showing up, then:
reboot, making sure the system is booted on UFS
kldstat | grep zfs (make sure zfs.ko is loaded)
then what is the output of:
sudo gpart show
zpool import

You should only need to enable zfs in loader.conf; not sure what happens if you try to load the module twice.

The zpool create puts metadata in the partition, which is why running create again tells you to add -f. But if you just do a zpool import, it will show you the zpools that are available to import.
It should show you the zpool you created, then just type in sudo zpool import whateveryoucalledthepool

zfs list will show the datasets and mountpoints
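Putting those together, roughly:

Code:
$ sudo zpool import                             # lists pools available to import
$ sudo zpool import whateveryoucalledthepool    # imports the pool by name
$ zfs list                                      # shows datasets and mountpoints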
 
Everything you mentioned checked out and works - that import command did indeed load it and I can see my 'test' file I created - so that's progress!

Question still remains as to why it's not loading this up at boot. Could be an evil conspiracy.

$ swapinfo
Device           1K-blocks     Used    Avail Capacity
/dev/nda0p3        4194304        0  4194304     0%

$ grep zfs /boot/loader.conf
zfs_load="YES"

$ kldstat | grep zfs
2 1 0xffffffff81f35000 5d5958 zfs.ko

$ sudo gpart show
=>        40  1000215143  nda0  GPT  (477G)
          40      532480     1  efi  (260M)
      532520    48234496     2  freebsd-ufs  (23G)
    48767016        2008        - free -  (1.0M)
    48769024     8388608     3  freebsd-swap  (4.0G)
    57157632   943057544     4  freebsd-zfs  (450G)
  1000215176           7        - free -  (3.5K)

$ sudo zpool import
   pool: zroot
     id: 16084183922043664300
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          nda0p4    ONLINE

$ sudo zpool import zroot

$ df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/nda0p2     22G    6.7G     14G    33%    /
devfs          1.0K      0B    1.0K     0%    /dev
/dev/nda0p1    260M    1.3M    259M     1%    /boot/efi
fdescfs        1.0K      0B    1.0K     0%    /dev/fd
procfs         8.0K      0B    8.0K     0%    /proc
zroot          434G     24K    434G     0%    /zroot

$ ls -l /zroot/
total 1
-rw-r--r--  1 root  wheel  0 Mar 11 19:34 test

$ sudo zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot   162K   434G    24K  /zroot
 
zpool import updates a local cache file of the pools it knows about and has imported, so when you reboot, the system looks at that and automatically imports them.
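If a pool still doesn't come back after a reboot, a rough sketch of what I'd check, assuming the stock rc.d scripts (the cache file paths below are the usual defaults; verify on your system):

Code:
# make sure the boot-time ZFS machinery is enabled
sudo sysrc zfs_enable="YES"
# import the pool and pin the cache file the rc script reads at boot
sudo zpool import zroot
sudo zpool set cachefile=/etc/zfs/zpool.cache zroot
# confirm a cache file now exists in one of the expected locations
ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache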
 