like creating a UFS on top of a ZFS volume, although the use cases for such a setup are rather limited.

Virtualisation brings exactly these challenges, so I don't think the use cases are all that limited.  However, working out what is sane, and why, is something to exercise the grey cells.
Just last week, I managed to resist using a UFS file system inside a virtual machine running on a KVM server, with the VM storage in qcow2 format, on top of an XFS-formatted file system provisioned by the Linux Logical Volume Manager (LVM), with the physical volume presented via iSCSI from a zvol on a ZFS server.  So UFS, under qcow2, under XFS, under LVM, under iSCSI, under zvol, under ZFS (a FreeBSD VM client on a Linux KVM server using a FreeBSD ZFS storage server).
I did build it, just for fun.  The advantage of this arrangement is that multiple VMs could have their storage provisioned from a single (iSCSI-backed) file system on the KVM server, with local volume management.  I really wanted that, because it is both convenient and flexible.
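For the curious, the plumbing went roughly as follows.  This is only a sketch: the pool, volume group, address and device names are invented, and the ctld target configuration on the FreeBSD side is left out.

  # On the FreeBSD ZFS storage server: carve out a zvol and export it as an iSCSI LUN
  zfs create -V 500G tank/vmstore
  # (add the zvol as a LUN in /etc/ctl.conf, then: service ctld reload)

  # On the Linux KVM host: log in to the target, then stack LVM and XFS on the LUN
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node --login
  pvcreate /dev/sdb
  vgcreate vg_vmstore /dev/sdb
  lvcreate -n lv_images -l 100%FREE vg_vmstore
  mkfs.xfs /dev/vg_vmstore/lv_images
  mount /dev/vg_vmstore/lv_images /var/lib/libvirt/images
  qemu-img create -f qcow2 /var/lib/libvirt/images/freebsd-vm.qcow2 40G

  # Inside the FreeBSD guest (after partitioning the virtio disk with gpart): UFS as usual
  newfs -j /dev/vtbd0p2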
Ultimately, common sense prevailed, and I put each VM in native format on a separate iSCSI volume.  So UFS, under iSCSI, under zvol, under ZFS -- much more sensible, but much less convenient to manage.
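Sketched the same way (names again invented), the per-VM layout is just:

  # On the ZFS server: one zvol per VM, each exported as its own LUN in /etc/ctl.conf
  zfs create -V 40G tank/vm/freebsd-vm1

  # On the KVM host: log in and hand the iSCSI disk to the guest as a raw device,
  # e.g. --disk path=/dev/disk/by-path/<target-lun>,format=raw when defining the VM
  iscsiadm -m node --login

  # Inside the guest: UFS straight onto the virtual disk
  newfs -j /dev/vtbd0p2

No qcow2, no XFS and no LVM in the middle, but every new VM means touching the ZFS server and the target configuration rather than just creating another image file.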
I'm sure that these sorts of arrangements will become increasingly common.  One suspects that grief awaits the adventurous.
By the way, I have lived in the same rural location for the last decade.  I would say I lose power unexpectedly at least 20 or 30 times a year.  All my UFS file systems are mounted with the defaults (ufs, local, journaled soft-updates).  I have never had data loss of a type that fsck could not repair.  I have only just started to use ZFS as my default FreeBSD file system type, and I still use a UFS root (mirror) on my ZFS server.
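For reference, that is nothing exotic in /etc/fstab -- a stock entry along these lines (device name illustrative):

  /dev/ada0p2   /   ufs   rw   1   1

The journaled soft-updates part is a property of the file system itself rather than a mount option: it is enabled at creation time with newfs -j, or afterwards with tunefs -j enable while the file system is unmounted.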