How do you test 13.0-CURRENT?

A few times lately, I've seen people mention that they installed FreeBSD CURRENT. I've never done this before and am curious as to where they installed it. If I were to do it, I'd use one of my other machines that has nothing on it and build from scratch, but I no longer have such a machine.

I'm sure one can do that in a VM, but CURRENT can become unstable without notice. Is that a problem? I wouldn't think so, but I don't know.

If I built it in a VM: I'm running UFS on my main workstation, so can I still use ZFS in bhyve or VirtualBox?
 
I use bhyve and have a VM with 13-CURRENT installed on a FreeBSD 12.1-RELEASE host.
I used the official memstick installer image to install it the first time.
What I would do is monitor the freebsd-current mailing list to make sure there are no show-stoppers.
CURRENT can be fluid, and the mailing list is a good place to feel the pulse.
I don't think an unstable CURRENT in a VM would affect your host machine.
I use UFS on my bhyve host, but I am sure you could also use ZFS.
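For reference, grabbing a snapshot installer goes something like this (the file name below is a placeholder; snapshot names change with every build, so check the directory listing first):

    # list the ISO-IMAGES directory on download.freebsd.org to find the current build,
    # then fetch the memstick image and its checksum file
    fetch https://download.freebsd.org/ftp/snapshots/amd64/amd64/ISO-IMAGES/13.0/FreeBSD-13.0-CURRENT-amd64-YYYYMMDD-rNNNNNN-memstick.img
    fetch https://download.freebsd.org/ftp/snapshots/amd64/amd64/ISO-IMAGES/13.0/CHECKSUM.SHA512-FreeBSD-13.0-CURRENT-amd64-YYYYMMDD-rNNNNNN
    sha512 FreeBSD-13.0-CURRENT-amd64-YYYYMMDD-rNNNNNN-memstick.img   # compare against the checksum file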
 
I can't answer that "ZFS on guest" question, but I do want to make one point.
The method of networking can vary. I dislike taps and bridges, so I always pass through an Ethernet adapter.
It makes setup much easier, and you get a dedicated NIC.

As much as I hate UEFI, when it comes to bhyve it really eliminates a step (bhyveload) and makes things easier.
Handbook section 21.7.4, "Booting bhyve Virtual Machines with UEFI", covers it well. Substitute the CURRENT ISO name for install.iso.

Edit: PCI passthrough requires VT-d, so make sure your CPU supports that if you go that route.
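A rough sketch of a UEFI boot with a passed-through NIC, assuming sysutils/bhyve-firmware is installed and that 2/0/0 stands in for your adapter's bus/slot/function (check with pciconf -lv); slot numbers and file names are examples:

    # /boot/loader.conf on the host: load vmm(4) and reserve the NIC for passthrough
    vmm_load="YES"
    pptdevs="2/0/0"

    # boot the installer; -S wires guest memory, which passthru requires
    bhyve -c 2 -m 4G -w -H -S \
      -s 0,hostbridge -s 31,lpc \
      -s 3,ahci-cd,FreeBSD-13.0-CURRENT-amd64-disc1.iso \
      -s 4,ahci-hd,guest.img \
      -s 5,passthru,2/0/0 \
      -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      currentvm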
 
I have this vague memory of doing that with VirtualBox while trying to learn how to use ZFS. In VBox it was easy: if you're familiar with it, you create a disk before installation. Then, during installation, you can choose the ZFS options. (Or even create a disk afterwards and format it with ZFS as you would on bare metal.)

I've not tried that on bhyve, for no real reason save that my bhyve FreeBSD installs are usually to try to learn something, and so far, that something hasn't included ZFS. As a bhyve install (indeed, a bare-metal install too) only takes ten minutes or less, why not give it a try? I can't see it breaking anything. I'm fortunate in that I have a couple of old laptops that I can use for testing too.
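If anyone wants to try it, the VirtualBox side is just adding a second disk; the names below are examples:

    # on the host: create an 8 GB virtual disk and attach it to the VM
    # ("SATA" must match the VM's actual storage controller name)
    VBoxManage createmedium disk --filename zfstest.vdi --size 8192
    VBoxManage storageattach "freebsd-vm" --storagectl "SATA" --port 1 --type hdd --medium zfstest.vdi

    # in the guest, the new disk typically shows up as ada1; make a simple pool on it
    zpool create tank ada1
    zfs create tank/test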
 
I would be interested in automated testing; I've got a bunch of servers idling around when they could do some fuzzing at night after backups are done. Has anyone got some resources on that for FreeBSD? Maybe there are some guides or scripts around; I don't have that much time to start my own project like the NetBSD GSoC Part1 Part2
 
Yes, you can run ZFS/guest on UFS/host, but:
  • ZFS needs lots of RAM (which has to come from the host).
  • ZFS's self-heal feature needs another device in the storage pool (RAID/mirror) to repair a checksum mismatch.
  • A pool created with only one disk lacks redundancy.
  • In a single-disk scenario (guest), corruption can be detected, but it can't be fixed.
Another scenario for ZFS/guest and UFS/host:
  • Suppose you manage to set up some RAID in the ZFS guest (I don't know how, but see the sketch after this post) on a UFS host.
  • If the ZFS guest becomes corrupted, the guest can be repaired.
  • If the UFS host becomes corrupted, then it doesn't matter what your guest FS is (ZFS or UFS): host corrupted => guest corrupted => the ZFS guest can't be fixed.
In short: in my opinion there's no benefit to using ZFS in a guest on a UFS host.

ZFS/guest on ZFS/host is a different story, but you'd have to benchmark its performance (building world, for example).

Disclaimer: I'm not a ZFS user; I use UFS everywhere.
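For what it's worth, the mirror-in-a-guest setup mentioned above is just a matter of giving the VM two (virtual or passed-through) disks and creating the pool across both; the device names here are assumptions:

    # in the guest: vtbd* under virtio-blk, ada* under AHCI
    zpool create tank mirror vtbd1 vtbd2
    zpool status tank   # shows the mirror and any checksum errors a scrub finds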
 
If the UFS host becomes corrupted, then it doesn't matter what your guest FS is (ZFS or UFS): host corrupted => guest corrupted => the ZFS guest can't be fixed.
What I do is this: my FreeBSD virtualization machine has a Disk on Module for the OS and bhyve, using UFS2.
The VMs live on a pair of NVMe drives mirrored with gmirror(8) for redundancy.

I can see your point as to passing through two different disks for ZFS. It can be done, though. I have even passed two NVMe drives through to a VM for a gmirror guest, though it was not very performant.
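For anyone wanting to reproduce that host-side mirror, the steps are roughly as follows (device names are examples; adjust for your hardware):

    # label a gmirror across the two NVMe devices and put UFS on it
    kldload geom_mirror
    gmirror label -v gm0 /dev/nvd0 /dev/nvd1
    echo 'geom_mirror_load="YES"' >> /boot/loader.conf
    newfs -U /dev/mirror/gm0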
 
You can also dedicate a whole physical disk to the VM as a raw device, and then install ZFS-on-root with EFI boot. With this approach, if the host fails or becomes corrupted, you can easily move the VM without having to think about virtual disk compatibility with another hypervisor. Or you can boot the VM's system directly as a host, so that you can try to fix the old host (which could even become the next VM).
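A sketch of handing a whole disk to a bhyve guest (ada2 and the VM name are examples):

    # the guest sees /dev/ada2 as its own disk; install ZFS-on-root + EFI onto it as usual
    bhyve -c 2 -m 2G -w -H \
      -s 0,hostbridge -s 31,lpc \
      -s 4,ahci-hd,/dev/ada2 \
      -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      rawdiskvm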
 
I know I don't have to, but you guys reminded me that you should use more than one disk with ZFS. Another thing I now remember is that I struggled to get UEFI working on this motherboard. I now believe it's because I didn't get the boot configuration correct, but I'm not positive. Ideally, I'd like to get UEFI working because we're stuck with it going forward, but it's a matter of, as my wife said, "Oh groan. Here you go again."

I really never have much in the way of problems installing FreeBSD from scratch. It's just the fear of the unknown, and of what I'll forget, that could turn this typical 20-minute exercise into a multi-day learning thing.
 
struggled to get UEFI working on this motherboard. I believe it's now because I didn't get the boot configuration correct
My last round of UEFI installs (FreeBSD and OpenBSD onto Samsung NVMe SSDs) went fine, mostly on Dell or Supermicro hardware: just make sure to select UEFI or EFI everywhere you can (remove DUAL or LEGACY), and use GPT. I had issues trying to make bootable CD/DVD media; a USB memory stick proved a lot easier to boot and install from.

Also, document everything you changed in case it doesn't work :sssh: and you want to restore the machine configuration to what it was before.
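A few quick checks after the install, to confirm the machine really came up via UEFI:

    sysctl machdep.bootmethod   # prints "UEFI" or "BIOS"
    gpart show                  # should show a GPT scheme with an efi partition
    efibootmgr -v               # lists the EFI boot entries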
 
It's just the fear of the unknown, and of what I'll forget, that could turn this typical 20-minute exercise into a multi-day learning thing.

Come on, jump into that cold water! We all know learning something new is cool, and I can tell you that knowing how to handle ZFS is rewarding and a real joy!
 
just make sure to select UEFI or EFI everywhere you can
This isn't my first rodeo. This is a six-year-old motherboard with a UEFI compatibility setting whose name I can't remember. I last fiddled with all that about four years ago, and only last week did I finally find out why I couldn't get UEFI to work. But reinstalling FreeBSD on this system means my life will be on hold while I learn UEFI and ZFS. I wish I had a second machine to do this on. I'd buy a cheap one, but I have a feeling it would then sit in the corner with nothing to do after that.
 
Normally I'd say "emulation!", but with UEFI and ZFS it's time for real hardware, to be sure everything is "real". I do have a few machines gathering dust, but every now and then there's usually something I want to try on a spare machine, so I try not to think about the gathering-dust aspect too much.
 
I run CURRENT on my desktop and laptop. From time to time it breaks, but then I am forced to fix it.

In fact, I have a few patches in src, including hardware support for my HP Spectre and fixes for breakage.

If you want to test CURRENT, run it on your desktop/laptop. However, if you are a newbie, run a RELEASE.
 
Yes, you can run ZFS/guest on UFS/host, but: […] In short: in my opinion there's no benefit to using ZFS in a guest on a UFS host.

I know I don't have to but you guys reminded me that you should use more than one disk with ZFS.

I'd counter the above by saying it depends.
I happily use ZFS on my laptop, which has only a single disk and 4 GB RAM. Why? Because I like the security/convenience of boot environments, and I like having my work directory structure snapshotted every 5 minutes to roll back mistakes I've made (see the sketch below).
Yes, it lacks redundancy and data healing. But all the data I care about is backed up, and the configuration of the machine is so simple that it's easy to replace. Having ZFS just means I probably don't need to reinstall in the (super rare) case an upgrade goes badly, and I don't have to reach for my backups if I screw up some local data.
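That rolling-snapshot habit is easy to wire up by hand; a sketch, assuming the work tree lives in a zroot/usr/home/work dataset (ports like sysutils/zfstools do this more robustly):

    # /etc/crontab: snapshot the work dataset every five minutes (% must be escaped in crontab)
    */5  *  *  *  *  root  zfs snapshot zroot/usr/home/work@auto-$(date +\%Y\%m\%d-\%H\%M)

    # after a mistake, roll back to the latest snapshot (name is hypothetical)
    zfs rollback zroot/usr/home/work@auto-20210611-1200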

In my last job, I was given a Windows laptop which I didn't get on with. I installed FreeBSD in a VirtualBox VM and used it to get work done. That again used ZFS for the same reasons above.

I do use FreeBSD CURRENT. I have it installed in a VirtualBox VM (because I need to run it on machines that already have macOS or Linux installed), and I do have it installed with ZFS. I use this VM for maintaining www/gohugo and sysutils/zfs-snap-diff. After upgrading them in line with upstream, this VM runs ports-mgmt/poudriere to test-build them in various jails (all supported -RELEASEs, plus -STABLE and -CURRENT, for both i386 and amd64).
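That poudriere round-trip looks roughly like this (jail name and version are examples):

    # one-time: create a builder jail for a release to test against
    poudriere jail -c -j 121amd64 -v 12.1-RELEASE -a amd64
    # test-build one port in that jail
    poudriere testport -j 121amd64 -o www/gohugo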

My VM gets upgraded between -CURRENT snapshots with an unofficial tool called mondieu; the upgrade process creates a boot environment so I can go back to a working configuration should a -CURRENT snapshot not be good.
I'm looking at scripting the process to grab a new snapshot each time and set everything up. It will take longer to get started each time I need to do maintenance, but it should be a more official approach using FreeBSD-produced snapshots.

In theory -CURRENT can be unstable. I've not experienced this at run time (though sometimes I've had it fail to boot correctly). A VM should contain any instabilities and not affect the host machine.
 
In theory -CURRENT can be unstable. I've not experienced this at run time (though sometimes I've had it fail to boot correctly). A VM should contain any instabilities and not affect the host machine.
I agree with you on this one. Regardless of whether I'm on a host or a virtual machine, I always have these six FreeBSD/CLI VMs near at hand (the source checkouts are sketched below):
  1. Well-configured for personal use: build/test, etc. (similar to my FreeBSD PC, without GUI)
    RELENG, freebsd-update(8), portsnap(8)

  2. Same as above, but minimally configured for reproducing bugs, debugging, and testing public questions (forums, etc.)
    RELENG, freebsd-update(8), portsnap(8)

  3. Getting to know the FreeBSD hierarchy/internals in its minimal state, as published at download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/*/
    RELEASE (FreeBSD-*-RELEASE-amd64-disc1.iso), zero modifications/configuration/installations

  4. Testing base/head
    CURRENT, compiled from svn(1) checkouts of /usr/src, /usr/ports and /usr/doc

  5. Testing base/stable
    STABLE, compiled from svn(1) checkouts of /usr/src, /usr/ports and /usr/doc

  6. Testing base/releng
    RELENG, compiled from svn(1) checkouts of /usr/src, /usr/ports and /usr/doc
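For VMs 4-6, the checkouts would look something like this (the svn layout of the time; svnlite is in base):

    # head for CURRENT; stable/12 or releng/12.1 instead for the other VMs
    svnlite checkout https://svn.freebsd.org/base/head /usr/src
    svnlite checkout https://svn.freebsd.org/ports/head /usr/ports
    svnlite checkout https://svn.freebsd.org/doc/head /usr/doc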
 
ZFS does support increasing the number of copies of files, so the checksum system can work on a single storage device. It's not a recommended setup, but it is workable. As you said, though, this won't replace actual redundancy. Also bear in mind that if you adjust the copies value for a dataset, it won't make any changes to existing files; it only affects newly written files. For the single-disk ZFS setup on my pfSense router I use copies=2.
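Setting that on a dataset is a one-liner; the dataset name is an example:

    zfs set copies=2 tank/important   # only affects blocks written from now on
    zfs get copies tank/important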

If corruption occurs on the underlying host machine, which in turn corrupts the guest storage device, ZFS can still fix it; it's essentially like a bad sector on an HDD.

ZFS has a reputation for high memory usage, but ironically it's the easiest to tame. Think of the Windows, Linux, and UFS file cache systems: all of them are barely tunable, if at all. I have seen UFS use loads of memory, but unlike ZFS you cannot throttle its usage, whereas with ZFS you can cap the size of the ARC.
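Capping the ARC is a single loader tunable; the value below is just an example:

    # /boot/loader.conf: limit the ARC to 1 GB
    vfs.zfs.arc_max="1G"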

Personally, I don't trust UFS anymore. It seems incredibly fragile on unsafe shutdowns, has no checksumming, and no ZIL; pfSense went through a spell, just before they started supporting ZFS, of many users reporting corrupted UFS storage. So yes, I would rather have single-disk ZFS than UFS. I typically only use UFS now for scratch storage or a quick tester VM.
 
… I dont trust UFS … incredibly fragile on unsafe shutdown…

Agreed! <https://forums.FreeBSD.org/threads/80655/post-515040> I found, at least:
  • loss of edits to /etc/rc.conf
  • breakage of two package installations
  • loss of all content of /usr/local/etc/sudoers
There's an essential question: why do FreeBSD-provided disk images not use ZFS?

… ZFS has a reputation for high memory usage …

I like to challenge that preconception. Recently, for fun, I ran OpenZFS + KDE Plasma in a virtual machine with around 1 GB of memory.
 
UFS bashing is nonsense. I had my share of problems back on 11-CURRENT, when SU+J didn't seem to work quite as intended (leaving a broken FS after an unexpected power-off), but I've been told this was fixed. And indeed, I've had a VM using UFS for a long time now, and it has never had a problem (despite several crashes of the host system).

Although there are not many reasons to use UFS, memory consumption is one, and performance can be another; they're interconnected, because with a small ARC, ZFS works, but not at its best performance. For VMs (and therefore VM images) there are good reasons to pick UFS: it's simple, proven, performant, and needs little RAM. With a VM, you'd expect things like redundancy to be provided by the storage layer below, so that's not really a reason to use ZFS.

As for the original topic: as a ports contributor, you need a -CURRENT machine sooner or later. A full build test of a port includes -CURRENT ;) I always have a -CURRENT VM for that purpose (doing all my test builds with poudriere). This VM indeed uses ZFS, simply because poudriere profits a lot from it through quick snapshots and clones.
 
As for the original topic: As a ports contributor, you need a -CURRENT machine sooner or later.
Yep, found that out quite quickly. My port was fine on 11 and 12 but failed on 13 (when it was still -CURRENT) due to a newer Clang version. So I set up a VM with -CURRENT just to test and fix the issue.
 
Just want to add: it makes a lot of sense that -CURRENT is required for ports; otherwise, incompatible ports could become "blockers" for a new release. Thanks to backwards compatibility of the kernel, you can just use one -CURRENT machine for all your testing, with builder jails for -CURRENT plus all supported releases. Just maybe, it would be a good idea to describe such a testing environment in the porter's handbook…
Well, sorry for the OT ;)
 
To the opening post, although we're now a version higher:

How do you test 13.0-CURRENT?

As my primary system, for the past few years: TrueOS (based on FreeBSD -CURRENT), then 12.0-CURRENT, 13.0-CURRENT, and now 14.0-CURRENT.

Everyday use, on hardware (not virtualised).

Use of Windows in VirtualBox is secondary. VirtualBox to test other systems (including FreeBSD, and some based on FreeBSD) is tertiary.

… I'm sure one can do that in a VM …

True, but see below.

but CURRENT can be unstable without notice.

Whilst it's explicitly bleeding edge, I never felt bloodied.

Quoting a manager at Netflix: "… Although it might seem scary to run "development" code in production, we find that it works very well in practice. The FreeBSD development branch is usually quite stable. …".

Is that a problem? I wouldn't think so …

You think correctly. If you choose ZFS you have:

… the security/convenience of boot environments …

– and much more.

… ZFS on my laptop that only has a single disk …

The same here.

I run CURRENT on my desktop and laptop. From time to time it breaks, but then I am forced to fix it. …

Maybe once a year I find a need to boot a previous boot environment.

The "latest" branch for packages might be perceptibly more of a problem than a -CURRENT base, but still, I can't describe any of this as so bloody or unstable that it can't be good for everyday use with boot from ZFS. YMMV, depending on use case.

When you have a virtual machine FreeBSD guest@ZFS on a host@UFS, you will very likely benefit from inserting the gsched(8) I/O scheduler (on the host). This will keep the host system responsive during heavy concurrent disk I/O, …
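For anyone curious, inserting gsched on the fly looks something like this (the provider name is an example; note that gsched was removed from FreeBSD 13, so this applies to 12.x and earlier hosts):

    # insert the round-robin scheduler above the provider the VM images live on
    kldload geom_sched
    geom sched insert -a rr ada0   # transparently interposes the scheduler on ada0
    geom sched status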

File systems aside: also/alternatively, VirtualBox allows execution to be capped, on the fly (whilst the guest runs) for the same purpose, to keep the host responsive.

I guess most of them pick one of the ISO …

Maybe.

I learnt the hard way that it can be much more pleasant to begin with ZFS (e.g. install from an ISO file) than with UFS on FreeBSD-provided disk images.

… I use the official memstick installer image …

I go for ⋯disc1.iso

I might have tried the memstick alternative on a handful of occasions. (Most recently, with computers on which FreeBSD can not be installed; the memstick also failed.)

… But I don't know where they install it. Probably VM. …

FreeBSD 14.0-CURRENT aside: with emulators/virtualbox-ose-additions, to avoid any risk of a kernel panic (in the guest) whilst rc(8) runs scripts at startup time, it may be advisable to:
  • limit the guest to a single CPU (see the sketch below).
I have not found time to investigate why some FreeBSD guests (with two or more CPUs) are more prone to this bug than others.
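Pinning the guest to one CPU is a single command while the VM is powered off (the VM name is an example):

    VBoxManage modifyvm "freebsd-current" --cpus 1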
 