Cloning a live FreeBSD system for easy disaster recovery

Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?
As with any open source project, there is always the long term risk that the project in question might be abandoned some time in the future, due to lack of resources or interest.
My view (FWIW): OpenZFS has less chance of getting abandoned than FreeBSD, because it is used by more projects.
If you are planning for the long term, it is wise to think about / have a plan for what you would switch to if your favorite open source project dies.
 
Is there any potential long term risk with Oracle reverting ZFS to closed source and OpenZFS developing independently?

Oracle's ZFS has already become closed source, and OpenZFS now develops independently, so the two versions are no longer compatible.

Like others, I may point out that running FreeBSD on top of ZFS makes administration tasks, e.g. regular backups, easier.
 
I use rsync. Here's my backup script, which cron runs every hour. It first checks whether another rsync process is already running; if one is, it aborts. The result is written to a log file so I can verify the backup was made.

Code:
#!/bin/sh
# This script is launched by cron every hour to back up the entire file system to a Raspberry Pi.
# "PermitRootLogin yes" must be set in /etc/ssh/sshd_config on the remote box.

if ! pgrep -x "rsync" > /dev/null
then
    echo "$(date) Initiating backup..." >> /home/USERNAME/scripts/last-raspberry-backup
    rm -f /home/USERNAME/scripts/last-raspberry-backup.log
    sshpass -p "PASSWORD" rsync --log-file=/home/USERNAME/scripts/last-raspberry-backup.log \
    --archive --hard-links --delete --sparse --xattrs --numeric-ids --acls --progress \
    --exclude=/usr/home/USERNAME/.cache --exclude=/dev --exclude=/tmp --exclude=/media \
    --exclude=/mnt --exclude=/proc --exclude=/var/cache --exclude=/compat/linux/proc \
    --exclude=/usr/home/USERNAME/.gvfs \
    --exclude=/usr/home/USERNAME/share \
    --exclude=/usr/home/USERNAME/.local/share/Trash \
    --exclude=/var/db/entropy \
    --bwlimit=1000 \
    / root@192.168.0.22:/mnt/freebsd-backup/
    echo "$(date) rsync exit code $?" >> /home/USERNAME/scripts/last-raspberry-backup
else
    echo "$(date) rsync already running, skipping..." >> /home/USERNAME/scripts/last-raspberry-backup
fi
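For completeness, an hourly cron schedule for a script like the one above might look as follows. This is just a sketch; the script path and filename are placeholders, not taken from the original post.

```shell
# Hypothetical crontab entry (edit with `crontab -e` as root):
# min  hour  mday  month  wday  command
0 * * * * /home/USERNAME/scripts/raspberry-backup.sh
```

Cron will run the command at minute 0 of every hour; the script's own pgrep check prevents overlapping runs if a backup takes longer than an hour.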

Re-opening this thread to say a big thank you. This saved my life after I accidentally overwrote a couple of GB at the start of the boot drive. I had separate /usr, /var and /tmp partitions, so the damage was limited to the root filesystem, which could no longer boot.

Just in case some careless fella bumps into the same issue, here is what I did:
  1. Booted with a FreeBSD live CD (same version, 11.3 in my case)
  2. The system would not "see" my partitions. I fired up gparted on /dev/ada0, which threw an error that the partitions were possibly corrupted and asked me to choose between setting up either an MBR partition table or a GPT one. I selected GPT first, without committing the changes in gparted! I elected to just display the partitions, and there they were: all the partitions I had. So I figured I was good and committed this info.
  3. Repaired the overwritten boot loader with the instructions in https://forums.freebsd.org/threads/how-to-restore-boot-loader.62390/post-360340
  4. The system would still (obviously) not boot, so I figured I had to recreate partition ada0p2 (the first working partition, that is). To do so, I formatted the partition with newfs -U /dev/ada0p2
  5. Now, here's where it got tricky. I could not run the rsync command to recover the files that had been backed up with the script above. So I made a new install on a fresh disk; the old one, the one I was trying to recover, became /dev/ada1
  6. After this "minimal" install, I ran pkg install rsync.
  7. Mounted /dev/ada1p2 at /tmp/old
  8. Used rsync to restore whatever was there.
  9. Removed the disk I had made the fresh install on, to try and boot from my recovered disk
The system would boot to a point throwing an error:
Code:
mountroot: unable to remount devfs under /dev (error 2)
mountroot: unable to unlink /dev/dev (error 2)

Not an expert on FreeBSD, and time was pressing. So I booted up the live CD again, mounted ada0p2, and copied the entire /dev directory from the live CD to the disk. Rebooted, and this time I got past that error. There were a couple of errors about non-existent mountpoints (/usr, /var and /tmp), for which I just made the specific directories and rebooted.

And it booted like a charm. Praise FreeBSD!
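For anyone following along, steps 7-8 of the recovery could be sketched roughly like this. This is a hedged sketch, assuming the same host address and backup path as the backup script earlier in the thread; device names and paths must be adjusted to your own layout.

```shell
#!/bin/sh
# Sketch of the restore: mount the repaired partition and pull the backup
# back from the Raspberry Pi. The host (192.168.0.22), backup path, and
# device name (/dev/ada1p2) are assumptions taken from earlier posts.
mkdir -p /tmp/old
mount /dev/ada1p2 /tmp/old || exit 1
rsync --archive --hard-links --sparse --xattrs --numeric-ids --acls --progress \
    root@192.168.0.22:/mnt/freebsd-backup/ /tmp/old/
```

Note the deliberate omission of --delete here: when restoring onto a freshly formatted partition there is nothing to delete, and leaving it off is safer if you restore over a partial install.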

blackhaz mate thanks a zillion again! :)
 
My advice is just for the future: with ZFS your task would be easy. In your position I would strongly advise building a new system with ZFS.
Any howto's describing this approach? Would ZFS protect me from disk failures on plain hardware with single-disks? Or are you proposing Freebsd running on top a ZFS RAID?
 
Any howto's describing this approach? Would ZFS protect me from disk failures on plain hardware with single-disks? Or are you proposing Freebsd running on top a ZFS RAID?

Hello!

With ZFS I just cloned my FreeBSD laptop with ease (I am writing this message on that very machine):

  1. took a fresh (and bigger) hard drive and connected to the laptop via USB adapter;
  2. the system recognized this drive as /dev/da0; the boot device was /dev/ada0
  3. manually partitioned new USB drive with gpart;
    1. gpart add -t efi -s 200M -a 4K da0
    2. gpart add -t freebsd-swap -s <my_new_swap_size>G -a 4K da0
  4. created bigger freebsd-zfs partition with gpart add -t freebsd-zfs -s <my_new_zpool_size>G -a 4K da0;
  5. installed boot code with gpart bootcode -p /boot/boot1.efifat -i 1 da0;
  6. added new zfs partition to the existing pool with zpool attach zroot /dev/ada0p3 /dev/da0p3;
  7. allowed the system to resilver that mirror, checking progress with zpool status;
  8. shut down, removed old disk, installed new disk which was connected to USB during resilver;
  9. booted up from new disk and removed old device from mirror with zpool detach zroot <old_drive_id>;
  10. now writing this message on a new system with bigger capacity and a new drive, with the old bootable system in my drawer.
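The steps above can be sketched as a single command sequence. This is a hedged consolidation of the post's own commands, not a tested script: the swap size is a placeholder, the initial `gpart create` is implied rather than stated in the post, and you should triple-check device names (da0 vs. ada0) before running anything destructive.

```shell
#!/bin/sh
# Sketch of the cloning sequence: da0 is the new USB-attached drive,
# ada0 the old boot disk, zroot the existing pool. Run as root.
gpart create -s gpt da0                          # fresh GPT table (implied by the post)
gpart add -t efi -s 200M -a 4K da0               # EFI system partition
gpart add -t freebsd-swap -s 8G -a 4K da0        # swap; 8G is an example size
gpart add -t freebsd-zfs -a 4K da0               # rest of the disk for ZFS
gpart bootcode -p /boot/boot1.efifat -i 1 da0    # install the EFI boot code
zpool attach zroot /dev/ada0p3 /dev/da0p3        # mirror onto the new partition
zpool status zroot                               # watch resilver progress
# After the resilver completes and the disks are physically swapped:
# zpool detach zroot <old_drive_id>
```

Omitting -s on the freebsd-zfs partition makes gpart use all remaining space, which is what you want when moving to a bigger drive.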
 
Thanks Argentum. Now, if I understand correctly, one has to make the initial install on ZFS before being able to clone a disk using your instructions, correct?
 
Thanks Argentum. Now, if I understand correctly, one has to make the initial install on ZFS before being able to clone a disk using your instructions, correct?

It is my belief that today it is a good idea to make most FreeBSD installs on ZFS. My personal experience has been good: with ZFS it is easy to clone, snapshot and maintain the system. My previous post is just one example of how I replaced the hard drive in my FreeBSD laptop.
 
Any howto's describing this approach? Would ZFS protect me from disk failures on plain hardware with single-disks? Or are you proposing Freebsd running on top a ZFS RAID?
Since you wrote that you have a cold backup machine, you can very easily back up to that system with zfs send/receive; that's a reasonable solution even for single-disk systems. ZFS also offers an additional level of data protection via checksums (it detects corruption where traditional filesystems don't).

When you reinstall, please also strongly consider reworking your firewall setup. A firewall is not a single machine but a network setup. A firewall where the packet filter is not on its own separate physical box (ideally managed through console access only, or, less securely, via a physically separate management LAN) does not deserve its name. Consider OPNsense or pfSense and have a look at their hardware recommendations (refurbished if costs matter). Additionally, ZFS makes it easy to run vulnerable services jailed (in a DMZ host).

On the fear of ZFS development/maintenance being stalled: there's so much demand from major players throughout the industry that this will not happen. It's more likely that Oracle drops SPARC hardware & Solaris before OpenZFS dies.
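As a rough sketch of the zfs send/receive approach: the pool name zroot, the hostname backup-host, and the receiving dataset backup/zroot below are all assumptions, so substitute your own names.

```shell
#!/bin/sh
# Sketch: snapshot the whole pool recursively and replicate it to the
# cold standby over SSH. The first run sends a full stream; later runs
# should use an incremental send instead:
#   zfs send -R -i <previous-snapshot> <new-snapshot>
SNAP="zroot@backup-$(date +%Y%m%d-%H%M)"
zfs snapshot -r "$SNAP"
zfs send -R "$SNAP" | ssh root@backup-host zfs receive -Fdu backup/zroot
```

The -u flag on receive keeps the replicated datasets unmounted on the backup box, so they don't shadow its own filesystems; -F allows the receiving side to roll back to the last common snapshot.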
 