Other UFS and ZFS on same drive

I am working on a NanoBSD project. I am planning an unconventional route with separate slices for APP and DATA.
So: a normal NanoBSD setup with UFS on s1, s2, s3.
Due to the appliance nature of the build, I am looking for maximum filesystem protection for my extra slices.

Would ZFS on s4,s5 help any with filesystem resiliency?
Can I use both ZFS and UFS on the same drive?
This might end up on an industrial 4GB SD card.
s1, s2, s3 take up around 800MB just for safety's sake. So let's say I need a 2GB APP slice and a 200MB DATA slice.
2GB for our APP slice is way more than I need.
I guess what I am asking is how to best use all the extra space above the NanoBSD setup.
Any tricks or tips? I just learned of interleaving swap, so I have much to learn about Unix. Can I RAID1 partitions?

I realize MBR is probably not going to work with ZFS, but NanoBSD has a UEFI option and I think GPT too.
My other option is to extend the NanoBSD script and use a MemoryDisk for these additional slices/partitions too.
That sounds really messy.
 
Would ZFS on s4,s5 help any with filesystem resiliency?
Only somewhat -- you'll lose everything if your only drive dies, but you'll gain additional file-content checksumming; also consider copies=2.
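Something along these lines, for example (pool and device names are just assumptions; adjust to your layout):

# single-disk pool on the APP slice, keeping two copies of every block
zpool create app /dev/da0s4
zfs set copies=2 app
zfs get copies,checksum app    # verify the settings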

Can I use both ZFS and UFS on the same drive?
This might end up on an industrial 4GB SD card.
Check how many writes this SD card can endure. It depends on how much you plan to write to your data partition, I suppose.

s1, s2, s3 take up around 800MB just for safety's sake. So let's say I need a 2GB APP slice and a 200MB DATA slice.
2GB for our APP slice is way more than I need.
I guess what I am asking is how to best use all the extra space above the NanoBSD setup.
Any tricks or tips? I just learned of interleaving swap, so I have much to learn about Unix. Can I RAID1 partitions?
I would strongly suggest you use two APP partitions (two root filesystems) so you can do atomic upgrades and rollbacks. My nanobsds use a pair of 1 GB root file systems and 128 MB for configuration (/etc changes). I could go with even smaller root filesystems, but I see no point in doing that; you can't buy flash drives smaller than 8GB these days anyway. Besides, you cannot easily change/increase the rootfs size once the system is running.
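If it helps, the sizing knobs in a nanobsd config look roughly like this (values below are purely illustrative; the classic nanobsd.sh takes them in 512-byte sectors):

NANO_IMAGES=2                # build two root (APP) images for ping-pong upgrades
NANO_CODESIZE=2097152        # 1 GB per root filesystem
NANO_CONFSIZE=262144         # 128 MB configuration slice
NANO_DATASIZE=409600         # 200 MB data slice
NANO_RAM_TMPVARSIZE=40960    # md(4)-backed /tmp and /var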

I realize MBR is probably not going to work with ZFS, but NanoBSD has a UEFI option and I think GPT too.
My other option is to extend the NanoBSD script and use a MemoryDisk for these additional slices/partitions too.
That sounds really messy.
I've set up several nanobsd systems running on commodity servers (SuperMicro ones), from SATA DOMs or USB flash drives, using GPT partitioning, with no issues.
 
Looks like I am using std-x86 now for disk partitioning.
std-x86)
NANO_SLICE_CFG=s3
NANO_SLICE_ROOT=s1
NANO_SLICE_ALTROOT=s2
NANO_ROOT=${NANO_SLICE_ROOT}a
NANO_ALTROOT=${NANO_SLICE_ALTROOT}a

I see both std-uefi and std-uefi-bios.
Do you remember which setting triggers it? Looks like it's called NANO_LAYOUT.
std-uefi-bios)
NANO_DISK_SCHEME=gpt
NANO_SLICE_UEFI=p1
NANO_SLICE_BOOT=p2
NANO_SLICE_CFG=p3
NANO_SLICE_ROOT=p4
NANO_SLICE_ALTROOT=p5
NANO_ROOT=${NANO_SLICE_ROOT}
NANO_ALTROOT=${NANO_SLICE_ALTROOT}
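From my reading of the embedded common file, the selection would just be something like this in the .cfg (not tested yet on my end):

# in the .cfg, before the embedded 'common' file does its partitioning
NANO_LAYOUT=std-uefi-bios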

Do I really need both a UEFI and BOOT slice?
 
Are you using the work in the /tools/tools/nanobsd/embedded directory?
I am using the i386.cfg file as a template and modify for amd64 and add packages too.
Really my only glitch is the $rc section of /nanobsd/embedded/common
I just can't get it to generate a correct rc.conf; it misses half my entries.
So I just cleared that whole typical_embedded section so it only does touch firstboot.
I use the /nanobsd/Files/ structure to add my own rc.conf.
There is a note there about firstboot startup not working great.
That is my second problem: it won't generate the second image automatically.
grow_fs runs too early, before s2 is created, and grow_fs only grows s1 when it should ping-pong between s1 and s2.
I turned off grow_fs and made the disk RW for the first boot; I needed that to be successful for hostapd to work.
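For the rc.conf part, the way I have it (assuming I'm reading the stock helpers right) is roughly:

# my rc.conf lives at ${NANO_TOOLS}/Files/etc/rc.conf; the .cfg then just has
customize_cmd cust_install_files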
 
I started with FreeBSD 10.3 (we had a kernel module that has since been ported to 11.x), and there was no nanobsd/embedded back then. I sat down, read, and overrode the partitioning code. nanobsd is written pretty cleverly, allowing you to hook into specific places in the pipeline or override specific features just by writing shell functions.

Regarding customisation, I wrote a few shell functions and hooked them here and there to patch /etc (including populating /etc/rc.conf.d/) and /boot/loader.conf. I was split between providing a monolithic /etc/rc.conf and a set of small configuration files in /etc/rc.conf.d; in the end I chose the latter and left /etc/rc.conf to the system administrator. This way I can change the base configuration with future updates more easily, as there is less risk that the respective configuration file has been overridden by the system administrator.
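A rough sketch of the kind of hook I mean (the function name and file contents are purely illustrative):

cust_rc_conf_d ( ) (
    mkdir -p ${NANO_WORLDDIR}/etc/rc.conf.d
    echo 'hostname="appliance"' > ${NANO_WORLDDIR}/etc/rc.conf.d/hostname
    echo 'sshd_enable="YES"'    > ${NANO_WORLDDIR}/etc/rc.conf.d/sshd
)
customize_cmd cust_rc_conf_d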

Finally I wrote a small command-line tool to handle configuration persistence, an alternative to save_cfg, modelled after svn.

I would suggest against using firstboot, as your root file system will be read-only. Any first-boot customisations are better done in advance, when building the image.

Why do you need grow_fs at all? And why do you need a read-write file system to use hostapd (I'm more into backend servers)?
 
...
Due to the appliance nature of the build I am looking for maximum filesystem protection for my extra slices. ...
Would ZFS on s4,s5 help any with filesystem resiliency?
Sort of. If the SD card completely self-destructs, then having two copies of the data on the same SD card won't help. That's obvious. But it will help against read errors, and (given ZFS' checksums) also against undetected corruption. However, I think setting copies=2 would probably accomplish nearly the same thing and be more flexible. It does accomplish the same thing for the file data. I think the FreeBSD version of ZFS will store multiple copies of the metadata too (Illumos does, according to the documentation), but I'm not 100% sure (I only use ZFS; I haven't read the source code).
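If you go that route, a periodic scrub is what actually exercises the checksums and lets ZFS repair blocks from the extra copies (pool name is hypothetical):

zpool scrub app        # re-read every block and verify it against its checksum
zpool status -v app    # the CKSUM column shows repaired and unrepairable errors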

On normal spinning disks, having RAID-1 across two partitions on the same disk is a really awful idea for performance reasons: if you write a file, the disk head will have to jump back and forth, which makes it really slow. On flash storage, that's not an issue.
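So if you stay on UFS, mirroring two slices of the same card with gmirror(8) is at least mechanically possible; a sketch, with device names assumed:

gmirror load                                      # or geom_mirror_load="YES" in loader.conf
gmirror label -v appmirror /dev/da0s4 /dev/da0s5
newfs -U /dev/mirror/appmirror
mount /dev/mirror/appmirror /app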

Can I use both ZFS and UFS on the same drive?
Sure, why not? The more the merrier.
 
I needed to enable RW for hostapd to start for the first time, really to debug some things I had missed that it needed.
Now I have the image working RO (single image only). The last nag was wlan_xauth, which I had to add to the kernel.
I used NANO_MODULES="pf wlan_xauth amdtemp nctgpio" for some additional modules I needed to add.

I tried building NanoBSD for ZFS with the std-uefi-bios layout, but the PCEngines APU2 doesn't boot the UEFI image I created.
So what I really need is GPT on Legacy BIOS for APU2. That should allow me to use ZFS.
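For reference (from my reading of gpart(8); the device name and sizes are guesses, and nanobsd should be doing this part for me), a GPT disk that boots from legacy BIOS just needs a small freebsd-boot partition with pmbr/gptboot written into it:

gpart create -s gpt da0
gpart add -t freebsd-boot -s 512k da0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0
gpart add -t freebsd-ufs -s 1G -l rootfs0 da0    # NanoBSD root; gptzfsboot instead if the root were ZFS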

The nice thing about the original NanoBSD work is that it is documented well. The /nanobsd/embedded folder is not.
https://bsdrp.net/documentation/technical_docs/nanobsd

I need to get the second 'backup' NanoBSD slice going right first. Right now I am only getting s1 and s3 (config).
Tonight I plan on single-stepping through the updatep2 script, because I ran it and it failed with no error handling.
That should be enough to add partition s2. I worry the disk partitioning is not right.
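My plan, assuming updatep2 really does just read the new image from stdin and dd it onto the s2 slice, is to let the shell trace it (the image file name is whatever my build produced):

sh -x /root/updatep2 < _.disk.image 2> /tmp/updatep2.trace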
 
there was no nanobsd/embedded back then.
You kind of nudged me off the nanobsd/embedded work with this comment.
I had a working conf file in about an hour. I forgot there was an alternative.
I was trying to make the common library work, and in the end I might as well have written my own helper library.
Looks like packages and Files are handled slightly differently. Same concepts.
Using the -w and -b switches for easy tweaking is wicked, too.
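For example, once world and kernel are built, re-rolling the image after a config tweak is just (config name made up):

sh nanobsd.sh -c apu2-wap.cfg -b    # -b skips buildworld and buildkernel
sh nanobsd.sh -c apu2-wap.cfg -w    # -w skips only buildworld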
 
Here is a hook script I use(d) to make a GPT layout on the target device. You might have to customize it, though.
 

Attachments

  • gpt.txt
    5.3 KB
That work is incredible. The sizing of the partitions and splitting is phenomenal. Very elegant.

Last night I deleted all the sub-directories under /nanobsd except /Files and my new ${NANO_TARGETS}. Still refining my APU2 Wireless Access Point image. I finally figured out why NANO_MODULES was failing on the nctgpio module and gpiobus. Turns out that to add gpiobus, the module name needs a prefix: gpio/gpiobus.
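So the module list from earlier ends up looking like this (the entries are paths relative to src/sys/modules):

NANO_MODULES="pf wlan_xauth amdtemp nctgpio gpio/gpiobus"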
 