Reset FreeBSD server settings rapidly. How?

Hello all!

I'm currently developing DevOps procedures with Perl/REX. Sometimes, when the server settings get killed by a wrong approach or bad code, I need to reset the server settings to their defaults. The question is how to do it rapidly:
-- By creating an image of the partition? If this is the solution, is there a way to create an image of all partitions in one file?
-- By creating an install script and reinstalling FreeBSD from scratch?
-- By something else?

On my workstation I use VBox with preinstalled VMs, and when I kill the settings, I just delete that VM and clone a new one from prepared templates. But how do I do this on a server without VMs that is very remote from me? At reset time I need to remove everything that was installed and return the system configs to their virgin state. What is the fastest and most reliable solution?
 
Take a look at zfs snapshots. Depending on your use case a zfs checkpoint might make more sense.

If the workflow is something like: make a bunch of changes, make sure things are good, and then carry on or roll back, I would probably look into the checkpoint. There can only be one checkpoint, unlike clones, which you can have many of.
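
For example, the snapshot workflow is just a couple of commands (a minimal sketch, assuming a default zroot pool; dataset and snapshot names are illustrative):

    # take a known-good snapshot of every dataset in the pool
    zfs snapshot -r zroot@pristine
    # ...experiment, break things...
    # rollback is per-dataset, so loop; -r destroys snapshots newer than @pristine
    for ds in $(zfs list -H -o name -r zroot); do
        zfs rollback -r "${ds}@pristine"   # errors harmlessly on datasets created later
    done
    # a reboot afterwards is a good idea when the root filesystem was rolled back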
 
Are there any other options besides ZFS snapshots?
What is the best and fastest way to reinstall a FreeBSD server from some kind of template?
 
How about using jails? There's no easier or faster way to spin up a new, pristine base installation or copies of a pre-defined template...
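
For instance, with a ZFS-backed template (dataset and jail names here are just examples; this assumes a template jail was prepared beforehand and the jail datasets mount under /jails):

    # one-time: snapshot the prepared template jail's dataset
    zfs snapshot zroot/jails/template@pristine
    # instant copy-on-write copy of the template for this run
    zfs clone zroot/jails/template@pristine zroot/jails/work
    # start a throwaway jail on the clone (parameters per jail(8))
    jail -c name=work path=/jails/work host.hostname=work.test \
        ip4=inherit exec.start="/bin/sh /etc/rc" mount.devfs
    # when it is trashed: stop it and drop the clone; the template stays pristine
    jail -r work
    zfs destroy zroot/jails/work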
 
Alexandr Kirilov
Hi,
I don't know if it'll be a full solution for your problem, but hopefully this thread will give you some ideas:
https://forums.freebsd.org/threads/how-do-i-factory-reset-freebsd.86425/
 
My steps in such a situation:
  • save the original list of installed packages
  • copy the original /etc/ and /usr/local/etc (or tar them)
  • write a script that stops the new services, removes packages not in the original list, and restores the etc folders (renaming the old ones and copying the new ones), as sketched below
but certainly this is not a real restore
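
Something like this rough sketch (paths and filenames are just examples; adjust to taste):

    # --- save the pristine state once, right after install ---
    pkg query -e '%a = 0' %o > /root/pkgs.orig          # manually installed origins
    tar -C / -cJf /root/etc.orig.txz etc usr/local/etc

    # --- reset script (crude; stop your added services first) ---
    pkg set -yA1 -a                                     # mark every package automatic
    xargs pkg set -yA0 < /root/pkgs.orig                # re-mark the originals manual
    pkg autoremove -y                                   # drop everything not in the list
    mv /etc /etc.old && mv /usr/local/etc /usr/local/etc.old
    tar -C / -xpJf /root/etc.orig.txz                   # restore the original configs
    # renaming /etc on a live box is risky -- single-user mode is safer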
 
Are there any other options besides ZFS snapshots?
If ZFS is a problem, there is ufsbe, which works with UFS; it tries to mimic ZFS Boot Environments, but it takes some extra space and a different partitioning, so I don't know whether it would be okay in your situation. Disk space on rented online servers is not always cheap.

 
The tl;dr is ZFS snapshots/clones, unless destroying the added content (which happens in the background) is too much background activity; in that case, destroy+create the pool and restore from an externally stored ZFS snapshot. That has more downtime but finishes faster, and likely still beats rerunning the installer even fully scripted (xz extraction or downloading content is usually the bottleneck). Or, for a more thorough look...

The only way I know to create an image as one file "in one step" (not counting piped commands as one step) would be something like dd; you will need to reboot into separate media to read and write it. That's about the slowest choice, and if you are doing the whole disk blindly then you need separate boot media.
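
For reference, the dd variant might look like this (device and mount names are assumptions; run it from separate boot media with the target disk unmounted):

    # image the whole disk to one file on external storage
    dd if=/dev/ada0 of=/mnt/usb/ada0.img bs=1m conv=sync,noerror status=progress
    # ...and to reset, write it back the same way:
    dd if=/mnt/usb/ada0.img of=/dev/ada0 bs=1m status=progress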

Yes, the installer can be scripted, but there will be downtime while it reruns. If your machine is bottlenecked on xz extraction speed, zstd- or lz4-packed install data might be written faster, but I'm not sure FreeBSD's installer supports it; a pkgbase approach would support zstd, since pkg does. I'm not sure which other choices like bzip2 or gzip are supported, or whether they even run faster than xz decompression, but it could be worth investigating. This downtime could be avoided with ZFS clones or an already prepared UFS partition you just switch to. Further customizing the install media to remove things you don't use (some drivers, programs, etc.) would speed up the reinstall, but you will spend a lot more time preparing that media.
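
The scripted route goes through bsdinstall(8) and an installerconfig file; a minimal sketch (disk name, distribution sets, and package list are assumptions):

    # installerconfig -- run with: bsdinstall script installerconfig
    PARTITIONS=ada0
    DISTRIBUTIONS="kernel.txz base.txz"

    #!/bin/sh
    # everything from here on runs chrooted inside the new system
    sysrc hostname="devbox" sshd_enable="YES"
    pkg install -y perl5        # baseline tooling; needs network during install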

If your disk is at least twice as big as you need, you can have a second set of partitions that you either copy to the first set or "switch" to, though if you switch to them, you are now contaminating them. To switch without the contamination issue, have a third set and use it to overwrite the one you switched from. Keeping the third copy as a partition only has the advantage that you could switch to it directly if needed, but then you would be contaminating it; without that need, the third clean copy can be stored as an archive you restore. That archive could be created with dd (the pool/partition must be exported/unmounted, it's slow, and it could conflict with the second partition unless you store a separately created copy of that too) or with better tools like dump/restore, zfs send/recv, tar, etc.
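
A rough sketch of the spare-partition idea with UFS dump/restore (partition names are made up):

    # one-time: put a pristine copy of / on a spare partition
    newfs -U /dev/ada0p5
    mount /dev/ada0p5 /mnt
    dump -0La -f - / | (cd /mnt && restore -rf -)
    umount /mnt
    # to reset: boot the spare (or rescue media) and dump|restore the other way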

Otherwise, storing separate partitions as separate files is no big deal, and you can save a script with the partition-recreation steps alongside them if you want. If it must end up as a single file, you can tar the multiple files into one. There are even ways to append tar or other archive data to the end of a script itself. When a full-disk image isn't a requirement, you have other tools that understand the disk and the data: UFS dump/restore, zfs send/recv, or your favorite copying tool like cp, tar, rsync, etc. Unless slowed down significantly by the bad I/O of a fragmented layout, these will be faster than dd.
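
The append-an-archive-to-a-script trick can look like this (just a sketch; the marker name is arbitrary):

    #!/bin/sh
    # reset.sh -- everything after the __ARCHIVE__ marker is raw tar data,
    # appended with e.g.: tar -C / -cf - etc usr/local/etc >> reset.sh
    sed -e '1,/^__ARCHIVE__$/d' "$0" | tar -xpf - -C /
    exit 0
    __ARCHIVE__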

If you just want to switch the state on the one machine quickly, ZFS snapshots would let you say "OK, now change it to *that* state." If you need to go forward and back, you have to convert snapshots to clones, since rolling back through snapshots destroys the newer ones. ZFS boot environments make use of clones to rapidly bounce between setups from the boot menu. UFS can do that too, but if I recall correctly it is done as a second copy of the data on a second partition. You could otherwise maintain such backup partitions and datasets completely manually (dump/restore, zfs send/recv, cp, tar, etc.). Specifically, ZFS clones plus removing the old state is likely one of the fastest choices you have, but the removal step takes more and more (background) time depending on how much differed from the previous clone/snapshot state. You need to make sure your workflow always leaves that one original snapshot around to be cloned and switched to.
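
In practice, bectl(8) handles the clone bookkeeping for you; a sketch (BE names are just examples):

    bectl create pristine              # capture the fresh install as a BE (a clone)
    bectl create -e pristine work1     # disposable copy; 'pristine' stays clean
    bectl activate work1 && shutdown -r now
    # ...work1 gets trashed; reset by switching to a fresh copy...
    bectl create -e pristine work2
    bectl activate work2 && shutdown -r now
    bectl destroy -o work1             # -o also removes its origin snapshot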

ZFS checkpoints are about rolling back a whole pool; they are good to create at moments where you think a change could go bad (improperly adding a disk makes it a permanent member of the pool in an undesired way, a newly enabled ZFS feature has negative performance consequences, etc.), as the pool can then be put back to the state it was in before such changes took place. You are allowed only one at a time per pool, and if I recall correctly they block some pool operations, so you may find you cannot maintain them long-term for some workflows. The time to roll back is, I'd presume, either nearly instant (changes after that point in time are simply ignored) or slow (those changes are removed, likely holding up the import), but I don't remember the results of my rollback tests.
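
The commands themselves, for reference (pool name assumed; the rewind only happens at import time, so for a root pool you'd do it from rescue media):

    zpool checkpoint zroot                     # only one per pool
    # ...risky pool-level change (vdev add, feature upgrade)...
    zpool checkpoint -d zroot                  # all good: discard it
    # or rewind instead -- only possible at import time:
    zpool export zroot
    zpool import --rewind-to-checkpoint zroot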

If you will save multiple similar states, ZFS snapshots/clones don't take additional space, as long as they were copies of another state rather than uniquely created states from scratch. If the states will be unique (or you aren't using zfs send/recv), then compressing an archive may help it take less space. Beyond the usual .tar.xz (or whatever compressor), you can use tools such as archivers/zpaqfranz, which can deduplicate the files you compress; that saves more than any compressor alone ever achieves in such a case, and it has its own snapshot and incremental-backup logic too, where increments can be written to the same or a separate archive file and only an index file is needed to properly create the next addition. Loading an archive/dump/recv will not bring the machine back up faster than switching to a clone, but that downtime may be shorter than the background task of deleting unused old clone states, which again happens in the background of a running system.
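
With archivers/zpaqfranz that looks roughly like this (it follows the zpaq-style syntax; treat the exact flags as assumptions to check against its docs):

    # each run appends a deduplicated new "version" to the same archive
    zpaqfranz a /backup/states.zpaq /etc /usr/local/etc
    zpaqfranz l /backup/states.zpaq            # list versions/contents
    # pull out an older version for inspection (zpaq-style -until/-to flags)
    zpaqfranz x /backup/states.zpaq -until 2 -to /tmp/state2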

I'm sure there are other techniques that could be used, though I don't think anything gets faster unless you head toward dark.intr0's approach of knowing exactly which differences you want restored and selectively restoring only them.
 
Alexandr Kirilov
Hi,
I don't know if it'll be a full solution for your problem but hopefully this thread will give you some ideas:
https://forums.freebsd.org/threads/how-do-i-factory-reset-freebsd.86425/

It wasn't clear whether the intention was to remove non-root user accounts and remove the contents of home directories, including /root …
 
It wasn't clear whether the intention was to remove non-root user accounts and remove the contents of home directories, including /root …
Yes, that's right. I think what he has in mind is more of a "back to factory settings", including dirs like /home, /var, etc., which a BE can't really provide unless you tweak all the mountpoints to noauto, which from my tests leads to a barely usable system (because you have to mount them manually after boot).
I've tried to include /usr/home in a BE; it's not convenient. Snapshots are the real deal for that.
Anyway, the OP doesn't want ZFS to be part of the solution, so ... it has to be something else.
 
Thanks, that reduces my confusion around the manual page.
Good to know that I am not the only one who gets confused :)
Jokes aside, I really believe that BE is a powerful tool, so powerful that one can be tempted to extend its use to the entire operating system.
Sadly (or not), it wasn't designed for that; the example in the manual page called "depth" is probably a more realistic extended version of it.
 
Alexandr Kirilov
Hi,
I don't know if it'll be a full solution for your problem but hopefully this thread will give you some ideas:
Have seen this thread. Got this approach on my list of possibilities. Will try it.
 
I need to reason through all of it. SUPER-MEGA-HUGE thanks for this detailed explanation.
 