Musings of a noob as I migrate from Windows to FreeBSD in my homelab

OK, so maybe I'm not a complete noob, but a 20+ year hiatus means I definitely don't have any claim to a veteran card - I'll accept the noob label for now. I'm not writing this as a how-to for anything in particular - it's more intended as a list of gotchas and/or points of confusion I encounter during my cutover from a Windows-based homelab to FreeBSD in the closet + Linux clients. Maybe it'll help someone. Maybe not...

Starting point:
  • Supermicro X11DPH-I w/ 128GB RAM & 10 enterprise nvmes: 6x 1.92TB and 4x 960GB.
  • Windows Server 2019 file server, also running Hyper-V
    • Hyper-V guests include a Windows Server 2016 DC and an Exchange Server 2013 VM (running on Windows Server 2012), along with a couple of useless Windows VMs that are only good for testing purposes
  • enough spare parts kicking around that I can cobble together another couple of machines, at need
Where I'm headed:
  • Supermicro-based server will gain an LSI SAS HBA (8i) and 8x 14TB HDDs
  • FreeBSD will run on this hardware
    • the new giant HDD pool (I'm thinking one big pool of raidz3) will allow me to re-rip all my Blu-ray media without compression or scaling compromises and store it in native format
    • VMs will be retired, eventually, in favor of a set of jails running the services needed for a small OpenLDAP + Kerberos auth system, a DNS server, and jellyfin.
    • Maybe a small family mail server to replace the Exchange functionality I have, too, if I'm bored and feeling like a new adventure...
  • A backup/transition server has been assembled using an Intel i7-7700 and Z270 motherboard, along with 1x 500GB nvme and 4x 120GB Samsung 850 EVO SSDs.
    • Long term this will run secondary DNS, LDAP, and Kerberos jails...
    • Short term I'm getting the Windows VMs up and running on bhyve on this machine
I'm just puttering along and don't expect to be "done" for quite some time, but I've done the hardest part - I stopped planning and started doing.

Phase I: migrate Windows VMs so I can tear down and rebuild

  • I installed FreeBSD on the backup machine using the single nvme drive (only 1 m.2 slot on my MSI Z270-A Pro) - no redundancy is a risk for zroot, but I blew my budget on 8x 14TB HDDs. It'll have to do.
  • I created a zpool using the 4 SATA SSDs, configured as a striped pair of mirrors - total reported storage is roughly 220GB, which should be sufficient for VM hosting.
    • I created a dataset called "pool/bhyve" to hold all the VMs under a single snapshottable umbrella
      • Problem #1: vm-bhyve couldn't find the mount point for pool/bhyve when I ran vm init.
      • Noob issue, for sure... destroying the pool, recreating it with canmount=off and mountpoint=none, and then creating the dataset pool/bhyve with an explicit mountpoint=/bhyve resolved the problem (rough commands below).
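        • Roughly what I ended up running, for anyone hitting the same wall (pool name and disk names here are placeholders - adjust for your own drives):
          # recreate the pool with no mountable root dataset
          zpool create -O canmount=off -O mountpoint=none pool mirror ada0 ada1 mirror ada2 ada3
          zfs create -o mountpoint=/bhyve pool/bhyve

          # point vm-bhyve at the dataset and initialize it
          sysrc vm_enable="YES"
          sysrc vm_dir="zfs:pool/bhyve"
          vm init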
  • Since I wasn't sure I'd have RDP access to ported VMs on first boot, I decided I needed a local VNC viewer. To do this, I needed a window manager. Which display server to use? Xorg is older than I am (actually no, that's a lie, but it's old). I thought Wayland might be fun to try.
    • I followed the FreeBSD handbook for installing Wayland and Wayfire, along with the forum advice to make sure dbus_enable="YES" exists in /etc/rc.conf.
      • Problem #2: don't overthink things. Messing around with the wayfire config file was a bad idea. Just uncommenting the [output] block caused Wayfire to launch with a background and nothing else. The handbook makes it look like you need this in wayfire.ini, but that's wrong.
        • After a bit of grief, I decided to start wayfire using an exact replica of wayfire.ini as it exists in /usr/local/share/examples/wayfire. NO changes made. It works!
      • Problem #3: my USB mouse wasn't working... no button action, scrolling, or mouse pointer movement
        • Some more digging, and I stumbled on the solution: delete any reference to moused from /etc/rc.conf. After doing this, the mouse works perfectly.
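          • The rc.conf side of this boils down to a couple of sysrc invocations (a sketch, assuming the stock moused_enable entry was the culprit):
            # wayland/wayfire prerequisite from the handbook/forum advice
            sysrc dbus_enable="YES"

            # hand the mouse back to libinput by removing moused from rc.conf
            sysrc -x moused_enable
            service moused onestop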
    • Wayfire now working, I installed Firefox and TigerVNC.
  • FreeBSD now had storage ready to go for VMs and a working window manager, so it was time to try moving over a VM.
    • I decided to use vm-bhyve, since it seems pretty straightforward.
      • I created a public bridge and attached my ethernet interface to it - 20+ years ago this would have been so much harder!
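        • The whole thing is a couple of vm-bhyve commands (em0 is a stand-in for whatever your ethernet interface is called):
          vm switch create public
          vm switch add public em0
          vm switch list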
    • I created a copy of the windows VM template and edited for my needs (nvme drive spec), making sure my template lined up with the how-to.
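      • For reference, the relevant bits of the edited .conf look something like this (a sketch rather than my literal file - sizes and names are placeholders, and disk0_name later gets pointed at the converted raw image):
        loader="uefi"
        cpu=2
        memory=4G
        network0_type="virtio-net"
        network0_switch="public"
        disk0_type="nvme"
        disk0_name="disk0.img"
        graphics="yes"
        xhci_mouse="yes"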
    • I installed the virtio-net drivers in one of my scratch VMs while it was still running on Hyper-V
    • I shut it down, exported it, and copied the hard disk file over to the VM directory on my FreeBSD machine.
      • Problem #4: Attempting to use qemu-img to convert from vhdx to raw failed - while the qemu docs indicate that vhdx is a supported file format, I found otherwise.
        • The solution was to use the Hyper-V disk edit tool to convert the exported vhdx to vhd, and then copy the vhd to FreeBSD. qemu-img was able to successfully convert this to raw format.
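          • In case it saves someone a search, the conversion that finally worked was plain vpc -> raw (filenames are just examples):
            # vhd ("vpc" in qemu-speak) to raw
            qemu-img convert -p -f vpc -O raw scratchvm.vhd disk0.img
            qemu-img info disk0.img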
    • I deleted the default disk0.img file that vm create put in the dataset, and pointed the .conf file for the machine at the new raw file I'd just created, instead.
      • Curiosity: while an ls -lsa command showed a reasonable file size for the .vhd file that was copied over, the raw file had an obnoxiously large file size.
        • This caused a brief moment of panic - did I just use my entire SSD pool for a single file? No... zfs list showed that the data set referred to a reasonable ~18GB of data. To check this I deleted the raw file, and the Refer column went down to ~9GB. Re-create the raw file, and it goes up to ~18GB. The ls command still shows 128GB file size. WTH?
        • Michael Lucas' books on ZFS warned about this, but I forgot... standard file/directory management tools can give wonky results on ZFS! I chose to believe zfs list's Refer column and moved on...
    • OK, all set... I used vm start to fire up the machine, and watched the 4 piddly cores on my i7-7700 get some load for the first time in many years.
      • After a minute or so I connected using TigerVNC, and it works perfectly! I'd venture to say that performance of the VM actually feels snappier running on bhyve than it ever did when it was on Hyper-V on the Supermicro machine (dual Xeon 8156s, to be replaced with eBay-special dual Xeon 8260s when I rebuild).
        • Pre-installing the virtio-net drivers was inspired (imho) - all I had to do was go into the network config and give the machine back its static IP, and everything works as it should.
    • That was surprisingly easy, all things considered.
  • I migrated a 2nd VM just because I didn't want to go to bed, yet...
  • Windows shares will be disabled during the rebuild, but that's ok because Windows clients are being replaced, too. I have enough external HDD storage to temporarily store everything important until I have NFS sharing up and running.
Phase II: rebuild the Supermicro machine into my own all-in-one NAS + services box running FreeBSD
  • waiting on a new case to show up that'll fit all my new HDDs...
Phase III: profit
  • rebuild clients with Linux variants
  • decommission Windows VMs
  • add jailed backup services to the i7-7700 machine (DNS, LDAP, Kerberos)
Updates to follow as I continue reintroducing myself to FreeBSD...
 
Incremental update:
  • Phase I is complete - all 4 Windows VMs are now running in bhyve, and folder shares have been backed up and turned off.
    • Painful moment: the Exchange Server 2013 VM fought me. It runs on Windows Server 2012 and was originally created using a much older version of Hyper-V. As a consequence it was using BIOS boot with MBR disks.
Curiosity: while an ls -lsa command showed a reasonable file size for the .vhd file that was copied over, the raw file had an obnoxiously large file size.
  • This caused a brief moment of panic - did I just use my entire SSD pool for a single file? No... zfs list showed that the data set referred to a reasonable ~18GB of data. To check this I deleted the raw file, and the Refer column went down to ~9GB. Re-create the raw file, and it goes up to ~18GB. The ls command still shows 128GB file size. WTH?
  • Michael Lucas' books on ZFS warned about this, but I forgot... standard file/directory management tools can give wonky results on ZFS! I chose to believe zfs list's Refer column and moved on...
  • I think I have this figured out... it's two things:
    • qemu-img was working on dynamically expanding .vhd disk files - when converting to raw it just expanded them to the full declared capacity (in most cases that's a lot of free space added)
    • compression=zstd is in my default dataset creation command. I've never actually used a filesystem with compression before - I always assumed the Microsoft "compress disk to save space" option was analogous to "I want to royally screw up this drive RIGHT NOW."
      • The massive difference in used space I'm seeing between ls -lh and zfs list is due to compression. I was expecting something modest, maybe a few percent. With the free space added to the raw disk images in the mix it gives an eye-popping compression ratio.
      • My Exchange server has a disk that's 250GB, and only 23GB is free according to Windows. After moving this to the ZFS dataset for my bhyve-hosted Exchange server, referred space used is 58GB. That's damn good!
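        • If you want to see the mismatch for yourself, ZFS will show both numbers side by side (dataset and file names below are placeholders for my Exchange VM's):
          # apparent size vs. what's actually stored
          ls -lh /bhyve/exchange/disk0.img
          du -Ah /bhyve/exchange/disk0.img   # apparent size
          du -h  /bhyve/exchange/disk0.img   # on-disk size
          zfs get used,logicalused,referenced,compressratio pool/bhyve/exchange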
  • new case shows up tomorrow, so it'll be time for some hardware fun...
 
The fact that bhyve no longer supports CSM mode has become a problem for me, as I want to P2V a Windows 10 partition from an old laptop to a new laptop. The only option I see is VirtualBox.

I did try to migrate the Windows 10 partition from MBR to GPT, from CSM to UEFI. It worked for about 15 minutes, then blue-screened, and kept blue-screening thereafter.

I did a P2V from Windows XP to VirtualBox a decade ago. That was relatively straightforward. I may have to do the same with this Windows partition.
 
I did try to migrate the Windows 10 partition from MBR to GPT, from CSM to UEFI. It worked for about 15 minutes, then blue-screened, and kept blue-screening thereafter.
Wow that sucks... I guess I got a bit luckier - I performed the conversion last night and this morning the VM is running fine. Have you considered trying to use grub2-bhyve to chainload? There's currently no maintainer for the grub2-bhyve port, but maybe this would work for you in the short term.
 
I loved reading this journal you are doing, thank you for posting it! I started migrating my workstation from Linux to FreeBSD a year ago and I want to start migrating my servers sometime soon. Your migration from Windows to FreeBSD is huge, but I believe you can make it :)
 
Wow that sucks... I guess I got a bit luckier - I performed the conversion last night and this morning the VM is running fine. Have you considered trying to use grub2-bhyve to chainload? There's currently no maintainer for the grub2-bhyve port, but maybe this would work for you in the short term.
With Windows everything is a roll of the dice. I've been lucky professionally in that I have never had to work with Windows or M$ products as my career started out on IBM mainframe 50 years ago and switched to UNIX 30 years ago. But now I'm working on a M$ Defender project and I hate it. M$ documentation is verbose yet non-existent and nothing works as documented or designed. It's a convoluted mess.

Had I been working on M$ products instead of mainframe and UNIX I'd have retired long ago.

Having looked at the Windows sources -- I worked for a company that had licensed them, though I wasn't working on that project -- IMO the Windows source was a mess. And the team that was working with it was quite vocal about the state of affairs. It was actually one of those team members who got me interested in UNIX (I was a mainframe systems programmer at the time).
 
Phase II update:
  • new case arrived (Rosewill RSV-L4000U) and all parts have been transplanted
    • Problem #5: The Rosewill rail kit that supposedly works with this case is utter garbage. The rail ears that bolt to the cabinet interfere with the handles on the case, with the result being that the case can't slide all the way back into position.
      • The only solution I could find was to remove the handle assembly, which took the front bezel with it. The case fits in the cabinet now, but without handles or bezel.
    • Problem #6: Poor planning... I thought the GPU I'd purchased for jellyfin transcoding was a single-slot card, but it turns out to be dual-slot. It blocks access to the neighboring x8 slot, which is where I need to put the PCIe -> 2x U.2 adapter card for my Optane drives.
      • My janky solution was to add a riser cable and mount the U.2 adapter in a modded cheap GPU riser mount I found online. The mount was overly large, so I cut it to size and glued a ton of Nd-doped magnets to the bottom so it'd stick to the side of the case securely.
      • [attached image: newbuild.png]
      • The horizontal 2xU.2 card sags a bit - I'm going to shore it up with a stack of Kapton sheeting wedged in between the U.2 card and the two 4x m.2 riser cards underneath it.
      • No issues observed in terms of the connection quality - the Optane drives show up as expected and I can read/write to them without problems.
    • I replaced the low-quality fans (noisy) that came with the case with 3x Phanteks T-30s in the midplane, 2x Noctua NF-P12 fans in front of the drive cages, and 2x Noctua NF-R8s at the back of the case. These are much quieter at 100%, and nearly silent when the system is idling.
      • Problem #7: Fan control is non-trivial. When I set the BIOS to eco-mode for the CPUs the proc fans ramp up and down annoyingly every 10 seconds or so. Just pick a speed.
        • I don't have the solution exactly worked out yet, but I've been playing with a Perl script I found on the TrueNAS forums that provides software PID control for the fans in the CPU zone and the peripheral zone. It uses sysctl-reported temperatures, so there's no need for a hardware solution like the Aquacomputer quadro/octo + thermocouples.
          • The downside of using the motherboard fan headers is that apparently Supermicro X9/10/11 series motherboards (and maybe others, I'm not sure) don't allow independent control over the fan headers. All headers with numbers (e.g. FAN1, FAN2, ...) are controlled as a single group, and there's another fan zone for lettered fans (FANA, FANB).
            • I have FANA/B controlling the front 120mm fans for the HDDs, which means all other fans are "CPU" fans and ramp up/down together according to the whims of the PID loop.
            • I'm leaning towards making the exhaust fans and the midplane fan that blows over the PCIe slots into non-PWM fans, running them at 100% all the time instead. Amazon has Molex to 4-pin adapter cables for $5 for 4 pcs. Then only the T30s that blow over the CPU sockets / RAM and the CPU fans themselves will be in the CPU fan zone... I'll get dynamic CPU cooling and maximal adapter cooling. HDD cooling will stay on a PID loop, as well (separate from the CPUs with longer time constant).
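            • Under the hood, the script (and my manual poking) drives the zones with Supermicro's raw IPMI fan commands - zone 0x00 is the numbered headers, 0x01 is FANA/FANB. The byte sequences below are the commonly documented X10/X11 ones and may differ on other boards, so treat this as a sketch (ipmi.ko needs to be loaded for local access):
              kldload ipmi
              # fan mode "Full" so the BMC stops overriding manual duty cycles
              ipmitool raw 0x30 0x45 0x01 0x01
              # zone 0 (FAN1..FANn) to 50% duty, zone 1 (FANA/FANB) to 40%
              ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32
              ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x28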
    • Last night I enjoyed the terror of reformatting the 8x 14TB SAS HDDs to 4kn sector size.
      • The 6 Western Digital drives are factory refurbs, and all of them upgraded from 512e to 4kn flawlessly using the WD-supplied wdckit tool running on a live Ubuntu USB.
      • The 2 Seagate X18 drives are new, and one of them failed to format properly.
        • After the failed reformat the drive wasn't bricked - it kept talking to me, thankfully. On the downside, the openSeaChest_Format -d /dev/sd<x> --showSupportedFormats output was showing BS, telling me the drive now only supported 512 and 520 byte sectors.
        • The other Seagate drive formatted just fine to 4kn.
        • After about an hour of floundering around, taking screenshots in preparation for opening a support case with Seagate, and going back over the steps I'd taken with a fine-toothed comb, I came to the conclusion that I'd done nothing wrong and that it wouldn't hurt to just try the format command again. I mean, if the drive was already headed for RMA, who cares?
          • The 2nd format attempt worked, so crisis averted - all 8 drives report 4kn sectors.
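            • For the record, the reformat itself is a one-liner per drive with openSeaChest (device node is a placeholder, exact flags may vary by version - check --help - and the operation wipes the drive):
              # what the drive claims to support
              openSeaChest_Format -d /dev/sdX --showSupportedFormats
              # reformat to 4096-byte native sectors (destroys all data, takes a while)
              openSeaChest_Format -d /dev/sdX --setSectorSize 4096 --confirm this-will-erase-data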
  • FreeBSD is installed and I've started prepping the system:
    • Problem #8: The FreeBSD installer doesn't allow GPT labeling of partitions when it sets up zroot (using the guided root-on-ZFS mode, since I couldn't puzzle my way through the manual setup). The end result is a zpool that uses device descriptors instead of my chosen labels.
      • I used all 4 of the 960GB nvme drives in a RAID 10 for zroot. 2 of these are in motherboard m.2 slots, and 2 of them are in one of the PCIe -> 4x m.2 riser cards. That brings up my 2nd issue - the installer grouped the 2 motherboard drives into one mirror and the 2 riser-mounted drives in the other mirror. I have no proof, but my gut says it would be safer to have the mirrors set up with one motherboard drive and one riser drive per pair.
      • I solved both problems at the same time by:
        • zpool detach one drive from the first mirror. Use gpart modify -l to add a label to the freebsd-zfs partition. zpool attach that partition to the 2nd mirror
        • Now I have a mirror with 3 drives, and one with a missing drive.
        • zpool detach one of the original drives in mirror #2, add a label to the partition, and zpool attach that partition back to the 1st mirror.
          • Mirrors now have one drive from the motherboard and one from the riser, just like I want.
        • zpool detach, gpart modify -l, and zpool attach the newly labeled partition back to the same mirror for the remaining two drives, and everything is good: mirrors mix the physical locations and I'm using GPT labels instead of device names to define the zpool.
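          • Spelled out for one drive (device names, partition index, and label are examples - nda1 here is a riser-mounted drive, nda3p4 a member of the second mirror, and index 4 is wherever the installer put the freebsd-zfs partition):
            zpool detach zroot nda1p4
            gpart modify -i 4 -l zroot-riser0 nda1
            zpool attach zroot nda3p4 gpt/zroot-riser0
            zpool status zroot    # repeat for the rest, then verify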
  • Next step is to set up the rest of the storage:
    • I'm thinking of a single raidz3 using all 8 HDDs, with a RAID 10 using 4 of the 1.92TB nvmes as a special vdev.
      • I'll add a generous small-file cutoff on the special vdev (the special_small_blocks property, set to 2MB), since I have plenty of space and the vdev is oversized for just metadata.
      • All user-facing datasets (home directories, media, other network shares) will live on the raidz3 pool, and since the special vdev has a large small file cutoff most files in user home folders will actually live on the nvme RAID 10.
    • Use the remaining 2 nvme drives (also 1.92TB) in mirror for bhyve vms
    • split the 2x 280GB Optane drives into 3 partitions each
      • use matching mirrored partitions as slogs for the 3 zpools: zroot, the unnamed raidz3, and the unnamed bhyve mirror.
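    • Sketched as commands, that plan looks roughly like this (pool names, device names, and partition indices are placeholders; whether special_small_blocks accepts 2M depends on the OpenZFS version and recordsize, so that needs checking):
      # 8x 14TB raidz3 plus a striped pair of NVMe mirrors as the special vdev
      zpool create -o ashift=12 tank \
          raidz3 da0 da1 da2 da3 da4 da5 da6 da7 \
          special mirror nda4 nda5 mirror nda6 nda7
      zfs set compression=zstd special_small_blocks=2M tank

      # mirrored pair of 1.92TB NVMes for the bhyve guests
      zpool create -o ashift=12 vms mirror nda8 nda9

      # Optane partitions as mirrored SLOGs, one pair per pool
      zpool add zroot log mirror nda10p1 nda11p1
      zpool add tank  log mirror nda10p2 nda11p2
      zpool add vms   log mirror nda10p3 nda11p3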
  • Vacation looming, so I won't get much further for a bit... once I'm back I'll start tackling services:
    • comments on order of setup welcomed... I'm thinking:
      1. Certificate authority
      2. DNS
      3. LDAP
      4. Kerberos
      5. NFS
      6. some sort of bolt-on monitoring framework (soliciting advice... Cockpit looks great but I think it's Linux only)
      7. jellyfin
      8. other services that are clearly not important, or I'd have an easier time remembering what they are...
 
What do you need Kerberos for? Never needed it.
I can't articulate a compelling reason, because the truth is I don't really have a need. I just like the idea of exchanging tokens backed by keypairs under my supervision... so, I thought I'd take a stab at getting it to work. I give myself 50:50 odds, and if it works it'll be satisfying. Pretty useless from a practical standpoint, but that won't stop me.
 
What do you need Kerberos for? Never needed it.

Good question - when do we need kerberos?

Usecase 1: ssh
ssh can use passwords, but you need strong ones, and different ones for each destination. That's annoying, and people will still try to hack it.
Or it can use asymmetric keys. That is better, but now you need to maintain each user@client -> server relation individually, copying key material around and maintaining it somehow. That gets annoying with an increasing number of relations.
With Kerberos you would just create each user's credential once in the KDC, and on every account@server where that user is allowed in, you put that name into ~/.k5login. It then doesn't matter where the user currently connects from. Removal is even simpler: just delete the credential on the server, and access will time out within hours.
The advantage plays out with an increasing number of users, client machines and server accounts.
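Concretely, a minimal sketch (realm, principal, and host names are made up):

    # on each server, let sshd accept tickets (sshd_config)
    GSSAPIAuthentication yes

    # on the target account, list the principals that may log in (~/.k5login)
    alice@HOME.EXAMPLE

    # on the client: get a ticket once, then ssh around without passwords or key copying
    kinit alice@HOME.EXAMPLE
    ssh server1.home.example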

Usecase 2: all the way through without password or login
I have four postgres databases, and a single pgadmin4 server from which to access and maintain them. The pgadmin4 server gets connected to from a client browser. The user logs into Kerberos once and starts the browser. The browser automatically grabs the Kerberos credential and sends it inside the HTTP request to the pgadmin4 server. The server doesn't display the login page; access is granted immediately. The server connects to all the databases with that same (now forwarded) credential. The server does not need any specific database access parameters configured; the grants come from the user's credential only. The database can then forward these credentials again to secondary services, as needed.

Also-advantage: passwords and asymmetric keys can be used by anybody who manages to obtain them, from anywhere (unless some firewall restricts it), but Kerberos creds can be limited to the IP addresses for which they were issued. So even if somebody manages to steal a cred (and it should only be valid for a couple of hours anyway), they cannot use it from another place.

Also-advantage: somebody might hack your webserver and grab the file with the database passwords from the application - happened a thousand times already. Cannot happen anymore, as such a file doesn't exist.

But be warned: it's a hassle to set this up initially. There are a hundred things to think about, and few tutorials.
 
This thread had me stop my homelab migration mid-webserver-setup on Linux and go with FreeBSD instead :p and go all-in with ZFS! I like the level of detail and might write up a blog post for my setup at some point.

My NAS is a single 10TB HDD; was single-partition NTFS with a basic Windows share for a while. I offloaded files to other drives, formatted the 10TB, ran a surprisingly simple zpool create NAS /dev/da0 command, and was good-to-go.
 
I can't articulate a compelling reason, because the truth is I don't really have a need. I just like the idea of exchanging tokens backed by keypairs under my supervision... so, I thought I'd take a stab at getting it to work. I give myself 50:50 odds, and if it works it'll be satisfying. Pretty useless from a practical standpoint, but that won't stop me.
I tried, and utterly failed to set up NFSv4 with Kerberos on my server some time ago. I'll be following this with interest.

Why? Because setting up Samba with central auth is ridiculous.
 
With Windows everything is a roll of the dice. I've been lucky professionally in that I have never had to work with Windows or M$ products as my career started out on IBM mainframe 50 years ago and switched to UNIX 30 years ago. But now I'm working on a M$ Defender project and I hate it. M$ documentation is verbose yet non-existent and nothing works as documented or designed. It's a convoluted mess.

Had I been working on M$ products instead of mainframe and UNIX I'd have retired long ago.

Having looked at the Windows sources -- I worked for a company that had licensed them, though I wasn't working on that project -- IMO the Windows source was a mess. And the team that was working with it was quite vocal about the state of affairs. It was actually one of those team members who got me interested in UNIX (I was a mainframe systems programmer at the time).
My industry tenure's also about fifty years. My last mainframe contract occurred about forty years ago. Good riddance.

Sun's were fun, but didn't pay the bills. Microsoft makes me money.

Windows source was last viewed by me around the year 2000, when a hacker published it to the Inet. It's insane. The more you look at it the crazier you become.


Phase II update:
  • new case arrived (Rosewill RSV-L4000U) and all parts have been transplanted
    • Problem #5: The Rosewill rail kit that supposedly works with this case is utter garbage. The rail ears that bolt to the cabinet interfere with the handles on the case, with the result being that the case can't slide all the way back into position.
      • The only solution I could find was to remove the handle assembly, which took the front bezel with it. The case fits in the cabinet now, but without handles or bezel.
For what it's worth, the Supermicro 825TQ has been a great workhorse for me over the decades.


I tried, and utterly failed to set up NFSv4 with Kerberos on my server some time ago. I'll be following this with interest.

Why? Because setting up Samba with central auth is ridiculous.
NFSv4 without Kerberos is now deployed on one of my networks. But Kerberos was too much for me too. At least for the time being.

Windows works well with NFS. Goodbye SMB, CIFS, or whatever they call it these days. Good riddance.
 
Another (small) update:
  • installed Xorg w/ Xfce on the new server. I sort of tried to get Wayland+Wayfire up and running on it, but ran into problems with drivers for the ancient NVIDIA GT 730 I'm using for console display. The card is too old for the "open source" drivers, and I didn't want to fight the proprietary drivers not playing well with Wayland. Xfce is just fine for those times when I might need to be in front of the machine to troubleshoot the lone Windows VM that will live there.
  • I wrote an rc.d script to start the Perl fan-control script for Supermicro X9/X10/X11 motherboards as a service - I still need to work on this a bit, as it's not entirely reliable (it starts the PID loop on reboot about 50% of the time). That's a detail project I can work on later...
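    • The skeleton is the standard rc.subr pattern, roughly this (paths and the script name are mine and provisional; the 50% failure is probably an ordering issue I haven't chased down yet):
      #!/bin/sh
      #
      # PROVIDE: fancontrol
      # REQUIRE: LOGIN
      # KEYWORD: shutdown

      . /etc/rc.subr

      name="fancontrol"
      rcvar="fancontrol_enable"
      pidfile="/var/run/${name}.pid"
      command="/usr/sbin/daemon"
      command_args="-f -P ${pidfile} /usr/local/sbin/fancontrol.pl"

      load_rc_config $name
      : ${fancontrol_enable:="NO"}

      run_rc_command "$1"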
  • CPU microcode set to load on boot, fwiw
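    • (Assuming the cpu-microcode/devcpu-data package route, the loader.conf knobs for anyone searching are roughly:)
      cpu_microcode_load="YES"
      cpu_microcode_name="/boot/firmware/intel-ucode.bin"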
  • I thought that enough time had passed that maybe I'd get lucky and my Intel X550-T2 would support SR-IOV, despite reports to the contrary from late 2024 / early 2025. No dice - Intel's product documentation says yes, but FreeBSD says no. The feature doesn't show up in dmesg, even after building the ixl driver from ports with SR-IOV explicitly enabled. So, I yanked the X550-T2 and replaced it with an X710-T2L, which works perfectly. I'll use the X550-T2 in a workstation somewhere.
    • VFs created on both interfaces - I'm attempting to manually load balance in advance, but the truth is I'll likely never get close to saturating either link and could just as easily cram everything on a single interface. Jails will use these VFs directly, which I find more elegant than epair bridging.
      • per-jail VFs allow me to configure per-jail pf rules, which keeps things simple. Lock it all down except the services the jails provide, and restrict those to my local networks. My junky secondary server doesn't have an SR-IOV compatible motherboard, so the jails that live there will be traditional vnet w/ epair. I'm hoping the pf config will still work on a per-jail basis and won't force me to create a single giant master config file that sits on the physical port. I've kind of been ignoring teaching myself how this should work...
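        • The VF plumbing itself is handled by iovctl(8) - a trimmed sketch of what I'm using per interface (device name, VF count, and MAC are placeholders):
          # /etc/iovctl_ixl0.conf
          PF {
                  device: "ixl0";
                  num_vfs: 4;
          }
          DEFAULT {
                  passthrough: false;
          }
          VF-0 {
                  mac-addr: "02:00:00:00:01:00";
          }

          # /etc/rc.conf
          iovctl_files="/etc/iovctl_ixl0.conf"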
  • Set up datasets for jailed services and created a nice clean template jail based on 14.3-RELEASE
    • cloned the template and then promoted the clones to get my thick jails for long-term use.
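      • The per-jail dance is just snapshot -> clone -> promote (dataset names are examples):
        zfs snapshot zroot/jails/template@14.3-RELEASE
        zfs clone zroot/jails/template@14.3-RELEASE zroot/jails/dns
        zfs promote zroot/jails/dns   # flips the clone/origin relationship so the jail owns its data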
  • Set up my first two jails:
    • Root X509 CA, configured using this very well-written guide
      • installed on an encrypted dataset using a keyfile that's located on a USB thumbstick (yes I made a backup on a separate USB stick)
      • will spend most of its life shut down with the dataset and encryption key unloaded and the USB drives nowhere near the machine - I'll only fire it up if I suffer a break-in and my intermediate signing CA needs to get trashed and rebuilt.
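        • The dataset itself is plain ZFS native encryption with a raw keyfile (paths are examples, with the USB stick mounted at /mnt/usbkey):
          dd if=/dev/random of=/mnt/usbkey/rootca.key bs=32 count=1
          zfs create -o encryption=on -o keyformat=raw \
              -o keylocation=file:///mnt/usbkey/rootca.key zroot/jails/rootca

          # normal life: key unloaded, dataset unmounted, stick in a drawer
          zfs unmount zroot/jails/rootca
          zfs unload-key zroot/jails/rootca

          # when the root CA is actually needed
          zfs load-key zroot/jails/rootca && zfs mount zroot/jails/rootca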
    • Intermediate X509 CA + SSH CA
      • do I need X509 certs? Absolutely not! What's the point of screwing around, though, if you're not going to just go for it? I'll dabble in creating signed certs for SSL and TLS, because why not?
      • SSH CA signs both host and user keys... I don't see a need for my small setup to have diversified CAs.
        • set up and working: SSH-ing around my budding FreeBSD ecosystem is now certificate-based. Just a small trial balloon before I get into the weeds with Kerberos.
      • Later© I'll go back and add an online CRL daemon for X509 certs. Doesn't make sense to set it up until I actually have some certs floating around that might need revocation.
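      • The SSH CA side is all stock ssh-keygen (key names, principals, and paths are examples):
        # create the CA keypair inside the CA jail
        ssh-keygen -t ed25519 -f ssh_ca -C "homelab ssh ca"

        # sign a host key (-h) and a user key
        ssh-keygen -s ssh_ca -I server1-host -h -n server1.home.example ssh_host_ed25519_key.pub
        ssh-keygen -s ssh_ca -I alice -n alice alice_id_ed25519.pub

        # servers trust user certs via sshd_config:
        #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub
        # clients trust host certs via known_hosts:
        #   @cert-authority *.home.example ssh-ed25519 AAAA...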
  • Worthy of Note: I don't recall seeing it in the handbook, and I got a decidedly garbage answer when I Googled the problem (the AI at the top of the page said exactly the wrong thing, and I disproved it mere seconds later):
    • FreeBSD jails need the underlying ZFS dataset to have exec=on in order to start. The default value that my jail datasets inherited from their parent was exec=off. Didn't take long to figure this one out, but it's a gotcha I wasn't expecting after reading multiple how-tos (handbook, etc) on getting jails set up.
    • Side note: while jails can have userland programs that don't exist on the host, I discovered (the hard way) that if I want a kernel module (e.g. pf) to run in the jail, I'd better have it loaded on the host. Trivial and obvious, unless you're new...
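      • Both gotchas in one place, for future me (dataset name is an example):
        zfs get exec zroot/jails/dns
        zfs set exec=on zroot/jails/dns

        # pf inside a vnet jail still needs the module loaded on the host
        kldload pf
        sysrc -f /boot/loader.conf pf_load="YES"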
  • Next up is DNS
    • planning on using BIND, because that's what I used way back in the 90s. Curious to see how much it's grown since then...
    • Just a few primary zones + resolver configuration shouldn't be too much
      • DDNS is an option, but I'm leaving it for later
      • DNSSEC looks like maybe a nice feature to turn on, but how many zone maintainers actually sign and publish their zones? I'm not sure if it's worth the effort to set my resolvers up to validate signatures... Edit: looks like it's a single line in named.conf: dnssec-validation auto;. May as well give it a try... Edit edit: DNSSEC validation is on by default. Guess I need to catch up with the times...
      • I'll be adding in filter lists later - Unbound is probably easier for this, but I don't see why I can't get BIND to do it, too. A quick scan of the manual shows this is a supported feature.
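        • The BIND flavor of filter lists is response-policy zones (RPZ) - roughly this in named.conf, with the blocklist maintained as an ordinary zone file (zone name and path are placeholders):
          options {
                  response-policy { zone "rpz.home"; };
          };

          zone "rpz.home" {
                  type master;
                  file "/usr/local/etc/namedb/master/rpz.home.db";
                  allow-query { none; };
          };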
    • LDAP and Kerberos will have to wait - I also have an infrastructure project to finish. I've been waiting 10 years to get the go-ahead from my wife to punch holes in the walls and pull some better wiring. Now that I have it, I'd better finish before she changes her mind. 1000' of Cat6a + some ducting are waiting for me in the basement, so once my DNS is working and I've migrated existing machines to point at the new name servers, I'll be spending a few weeks pulling and terminating cable.
 
What do you need Kerberos for? Never needed it.
security/krb5 is not a monitoring tool; it's an authentication system for the connections a given server accepts. Correctly understanding what it does and where it sits in the connection stack really makes a difference in taking proper advantage of it. Alternatives often include the Heimdal libs, whether from ports or bundled in base.
 