poudriere buildhost - building for later releases in VMs with shared poudriere infrastructure.

I usually don't run the latest release branch on my servers. The same goes for my buildhosts, which usually aren't dedicated buildhosts but also serve other roles, e.g. as fileservers.
While my "daily driver" workstations/desktops are on the same release as my servers, on less important systems like my laptop, or on systems with brand-new hardware where I need driver support, I might run the latest release or even snapshot/BETA/RC versions.
As I also build my own packages, this leads to a chicken-and-egg problem: I don't have a buildhost that can build for newer releases, and the systems that run those releases are (comparably) slow to finish a bulk build job. A "quick update to the next RC" (like now with 14.1-RC1) can easily take 24h+ on such a machine, while a buildhost with dual-socket Xeons (24+ cores, 256 or 512GB RAM) would build the same set of packages in ~3-4h.

Of course one could simply run the newer release in a bhyve VM, install and maintain a full poudriere installation + ports tree in that VM, and then copy the built packages over to the server that serves them to the clients. But that means maintaining build options, ports trees (and possibly patches) and everything else in that VM, and keeping it all in sync with the buildhost (and possibly other VMs) to get consistent port options and dependencies.
In practice this doesn't work, at least not for very long - I tried it and got bitten by diverging port options more than once.
It also wastes (a lot of) storage on multiple (identical) ports trees, distfiles, package sets etc., and the packages have to be copied around after each build, which is cumbersome. The web interface would also need its own webserver and configuration in each VM, and you end up with multiple status pages instead of a single one for all builds.


Hence, my current solution is as follows:

On the buildhost I export all poudriere datasets via NFSv4, except ".m" (which is used to mount the build jails and should be kept local) and the jails datasets - the jail dataset needs to stay local because poudriere clones it (via ZFS) to create the build jails. The /etc/exports on the buildhost looks as follows:

Code:
/usr/local/etc/poudriere.d  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere    -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data       -alldirs -maproot=root:wheel fbsd14-buildvm
# .m is used for jaildata during builds; otherwise empty. keep it local to the buildvm
#/usr/local/poudriere/data/.m  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/cache  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/cache/ccache  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/images  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/logs  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/packages  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/data/wrkdirs  -alldirs -maproot=root:wheel fbsd14-buildvm
# jails need to be local on the buildhost because poudriere leverages zfs for creating the jails
#/usr/local/poudriere/jails  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/ports  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/ports/distfiles  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/ports/latest  -alldirs -maproot=root:wheel fbsd14-buildvm
/usr/local/poudriere/ports/quarterly  -alldirs -maproot=root:wheel fbsd14-buildvm
V4: /usr/local fbsd14-buildvm

The last line is needed to define the 'root' directory for the NFSv4 mount. The VM only mounts the /poudriere and /etc/poudriere.d directories, but the directory above them apparently *has* to be set as the root for the V4 export. I would have preferred to export only /usr/local/poudriere and /usr/local/etc/poudriere.d, but it seems one can only define one 'root' per remote host with V4 shares(?) and you can't use an empty dummy path as the root.
If this is wrong (I think it is...) PLEASE give me a hint on how to fix it - I pulled my hair out over this for the better part of a day...
Since the network connection (bridge) to the VM is local-only, I didn't bother hardening this further and considered it acceptable to use the 'maproot' option.

This is basically all that needs to be configured on the host (except for the bhyve VM itself, of course, but that's out of scope here).
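In case someone wants to replicate this: the NFS server itself is of course also enabled on the buildhost. Assuming the stock in-kernel NFS server, the rc.conf knobs look something like this (your setup may differ, e.g. a V4-only server can drop rpcbind):

```shell
# /etc/rc.conf on the buildhost - standard in-kernel NFS server with V4 enabled
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"     # uid/gid <-> name mapping daemon for NFSv4
mountd_enable="YES"       # reads /etc/exports, including the V4: root line
rpcbind_enable="YES"
```

After changing /etc/exports, a `service mountd reload` makes the new exports active.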


On the VM I needed some more tweaks:
Poudriere insists on creating all its ZFS datasets itself, so all poudriere datasets that are mounted from the host need to exist but stay unmounted. If you destroy them, poudriere will re-create and mount them on the next run.
I chose to simply set their "canmount" property to "off" so they won't interfere, while keeping poudriere happy because the datasets still exist:
Code:
# zfs list -ro name,mountpoint,canmount zroot/poudriere
NAME                                                         MOUNTPOINT                                                       CANMOUNT
zroot/poudriere                                              /usr/local/poudriere                                             off
zroot/poudriere/data                                         /usr/local/poudriere/data                                        off
zroot/poudriere/data/.m                                      legacy                                                           on
zroot/poudriere/data/cache                                   /usr/local/poudriere/data/cache                                  off
zroot/poudriere/data/images                                  /usr/local/poudriere/data/images                                 off
zroot/poudriere/data/logs                                    /usr/local/poudriere/data/logs                                   off
zroot/poudriere/data/packages                                /usr/local/poudriere/data/packages                               off
zroot/poudriere/data/wrkdirs                                 /usr/local/poudriere/data/wrkdirs                                off
zroot/poudriere/jails                                        legacy                                                           on
zroot/poudriere/jails/FreeBSD:14:amd64                       legacy                                                           on
zroot/poudriere/ports                                        /usr/local/poudriere/ports                                       off

As one can see, the datasets that are local to the build VM ('.m', 'jails' and every jail on that VM) have their mountpoint set to "legacy". This is needed because ZFS mounts its datasets *before* any (network) filesystems defined in /etc/fstab during boot, so those datasets would otherwise be overlaid by the NFS mounts.
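For reference, the property changes from the listing above boil down to a few zfs commands (dataset names as in my setup):

```shell
# host-mounted datasets: keep them defined but never mounted locally
for ds in zroot/poudriere \
          zroot/poudriere/data \
          zroot/poudriere/data/cache \
          zroot/poudriere/data/images \
          zroot/poudriere/data/logs \
          zroot/poudriere/data/packages \
          zroot/poudriere/data/wrkdirs \
          zroot/poudriere/ports; do
        zfs set canmount=off "$ds"
done

# VM-local datasets: 'legacy' mountpoints so /etc/fstab controls the mount order
zfs set mountpoint=legacy zroot/poudriere/data/.m
zfs set mountpoint=legacy zroot/poudriere/jails
zfs set mountpoint=legacy "zroot/poudriere/jails/FreeBSD:14:amd64"
```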

The corresponding entries in /etc/fstab of the VM look as follows:
Code:
buildhost:/etc/poudriere.d          /usr/local/etc/poudriere.d              nfs     rw,nfsv4        0       0
buildhost:/poudriere                        /usr/local/poudriere                    nfs     rw,nfsv4        0       0

zroot/poudriere/jails                   /usr/local/poudriere/jails                      zfs     rw      0       0
zroot/poudriere/jails/FreeBSD:14:amd64  /usr/local/poudriere/jails/FreeBSD:14:amd64     zfs     rw      0       0
zroot/poudriere/data/.m                 /usr/local/poudriere/data/.m                    zfs     rw      0       0

And here's the resulting list of mounted filesystems on the VM:
Code:
root@fbsd14-buildvm:~ # mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/home on /home (zfs, local, noatime, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
buildhost:/etc/poudriere.d on /usr/local/etc/poudriere.d (nfs, nfsv4acls)
buildhost:/poudriere on /usr/local/poudriere (nfs, nfsv4acls)
zroot/poudriere/jails on /usr/local/poudriere/jails (zfs, local, noatime, nfsv4acls)
zroot/poudriere/jails/FreeBSD:14:amd64 on /usr/local/poudriere/jails/FreeBSD_14_amd64 (zfs, local, noatime, nfsv4acls)
zroot/poudriere/data/.m on /usr/local/poudriere/data/.m (zfs, local, noatime, nfsv4acls)

Now one can use all the existing poudriere configuration (present in the mounted /usr/local/etc/poudriere.d directory) as well as the ports trees. The VM also writes its logs and html files to the common shared directories, so the web interface configured on the host is populated with the data from the VM. Ports trees are shared, not duplicated, which makes managing local changes much easier, and the package repository is available from the same location for all clients without (manually) copying files around and occupying space twice.
I've been building packages on this VM for my laptop throughout all the 14.1-BETAs, and currently a bulk job for 14.1-RC1 is running. So basically 'it works', but I feel it might need some polishing, which is part of why I started this thread.
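For completeness, the actual build workflow inside the VM is just the usual poudriere procedure - only the jail is created locally (on the VM-local dataset), everything else comes from the shared mounts. Jail name and package list path are examples from my setup:

```shell
# create the build jail locally in the VM - this is the dataset that must stay local,
# since poudriere clones it for every builder
poudriere jail -c -j FreeBSD:14:amd64 -v 14.1-RC1

# bulk build using the shared ports tree and the shared configuration from the host;
# logs and packages land on the NFS-mounted datasets
poudriere bulk -j FreeBSD:14:amd64 -p latest -f /usr/local/etc/poudriere.d/pkglist
```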
A lot of this was found out by trial and error and by working around errors as they came up, especially with the NFSv4 mounts. I still get some "No name and/or group mapping for uid,gid:(0,0)" errors at the start of every poudriere bulk; I couldn't figure out which shared dataset they come from, but builds work fine - so I've simply ignored them for now...
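My suspicion (not verified yet) is that this is the usual NFSv4 owner-string issue: NFSv4 transmits owners as name@domain strings, and if nfsuserd isn't running on both ends (or the numeric-id fallback isn't enabled), uid/gid 0 can't be mapped. The knobs I'd try:

```shell
# on both the buildhost and the VM: run the NFSv4 id-mapping daemon
sysrc nfsuserd_enable="YES"
service nfsuserd start

# alternatively, fall back to plain numeric ids in the owner strings:
sysctl vfs.nfs.enable_uidtostring=1      # client side
sysctl vfs.nfsd.enable_stringtouid=1     # server side
```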

Apart from this being mainly a braindump about getting this to work (and possibly helping someone achieve the same goal), I still have some open questions I couldn't figure out myself:
- can/should the list of exported filesystems be reduced? I'm not entirely sure it is helpful to share e.g. the 'wrkdirs' or 'cache' datasets
- is there a more elegant way to prevent the NFS mounts from overlaying the local filesystems, without using 'legacy' mountpoints?
- does poudriere manipulate any of its other datasets during bulk jobs, apart from the jail dataset that is cloned for each builder? (i.e. do I need to keep any other dataset local to the VM?)
and of course, first and foremost:
- is there a more elegant solution to this altogether? I found some (old...) mailing list discussions about adding bhyve capabilities to poudriere, but it seems this never led to any actual code...
 