Our backups are zfs snapshot-based, so recovery is pretty straightforward, and with configurations in git repos we always have another path for recovery if snapshots should somehow fail.
Could You please explain how to organize automated backups for the case where we need a “current snapshot” of the whole system?

Where is the best place to read about ZFS snapshotting (with real working examples)?
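For illustration, something as simple as this sketch is what I mean by an automated “current snapshot” (the pool name zroot and the auto- prefix here are assumptions, not our real policy):
Bash:
#!/bin/sh
# Take a recursive, point-in-time snapshot of every dataset in the pool.
# Run from cron, e.g.: 0 3 * * * /usr/local/sbin/autosnap.sh
zfs snapshot -r zroot@auto-$(date +%Y%m%dT%H%M%S)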
 
Some workplaces maintain sort of a 'template' image that they can quickly deploy. This may work for user-facing workstations, but servers are a different beast. You may have one box for email, another box hosting fileshares, yet another for the firewall, and yet another for DNS. Yeah, they may have some common base (like all running the same version of FreeBSD, say 13-RELEASE), and you want a good backup scenario so that you can recover quickly and be back up and running. Trouble is, that 'common base' is actually awfully minimal. FreeBSD already provides a bare-bones install image where you just need to turn on SSH and DHCP, but beyond that, an admin would need to maintain a backup copy of friggin' EVERYTHING.
So, making snapshots is the only solution? (Especially since HDD space is quite cheap nowadays...)

Automating the re-install procedure helps some, but it's still necessary to keep track of what goes where. This is partly why I kind of soured on Puppet and Ansible - not only do you gotta maintain the production stuff, you also gotta maintain a copy of it, which is frankly double the workload, even with those config managers helping out.
I still think that for a really big and geographically spread infrastructure (like 500+ servers, 1000+ network devices), Salt is really better because of its server-client architecture, where each client tracks changes on its server/device and reports this info to the central management server.
 
With Ansible, it's only text files.
I'm not talking about the config files. You also need to maintain all the data that the config files point to. File backups, specific software versions, ZFS snapshots, etc. Or are you gonna go through the entire process of re-acquiring LibreOffice for FreeBSD 11-RELEASE by taking your chances that the correct versions are still available in public git repos? Even if you can automate things with Ansible or Puppet, there's just no way to avoid maintaining your own archive for quick disaster recovery. And a proper disaster recovery strategy really means having 2 or 3 copies in as many different places, to boot. 😩 Gotta think things through, all the way through.
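To give a concrete flavor of what “maintaining your own archive” can start with: pkg(8) can pull a package and all of its dependencies into a local directory (the destination path below is hypothetical):
Bash:
# Fetch a package plus its dependencies into a local archive instead of
# trusting public mirrors to still carry these exact versions later.
pkg fetch -y -d -o /archive/pkgs libreoffice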
 
I still think that for a really big and geographically spread infrastructure (like 500+ servers, 1000+ network devices), Salt is really better because of its server-client architecture, where each client tracks changes on its server/device and reports this info to the central management server.
That works a bit differently... client devices only track their own changes, and report them back to the management console on the server. Gotta pay attention to how the data even moves.
 
Seed files are backed up on a flash disk and/or a hard-disk storage partition; having them on both is better. On this install, I simplified my understanding of which files are needed, and which stored settings weren't essential. Instead of having them all in separate subdirectories, I can have them all in one directory with additional prefixes or suffixes, then figure out where they go later, when I need them.

/etc/rc.conf and /boot/loader.conf go a long way. Then, custom config files in home directories. Crontab files, KERNCONF, and make.conf, if there are any, as well. A lot of the arguments in my old configuration options are obsolete, so I can figure out which settings I really needed.

Source files use dots "." as placeholders for additional directories, which automated scripts translate to the needed slashes "/" and subdirectories. This can be seen in /usr/src/ (if you have it) in reference to bin and sbin.
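A flattening script along those lines could look like this minimal sketch (the file list and destination directory are assumptions):
Bash:
#!/bin/sh
# Copy key config files into one flat directory, encoding each original
# path with dots: /etc/rc.conf is stored as etc.rc.conf, and so on.
DEST=/backup/seed
mkdir -p "$DEST"
for f in /etc/rc.conf /boot/loader.conf /etc/make.conf; do
    [ -f "$f" ] && cp "$f" "$DEST/$(echo "${f#/}" | tr '/' '.')"
done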

are you gonna go through the entire process of re-acquiring LibreOffice for FreeBSD 11-RELEASE by taking your chances that the correct versions are still available in public git repos?
A program like that from a dedicated organization will always be there. When that goes away, there will be news about it. Then, some other organization will pick it up, because its license allows it. If a program did become defunct upstream, it would still need security updates from somewhere. If some version is always available, then it should work, at least with a production version of FreeBSD. If a repo changes, maintainers and other users of it will address that and try to fix it. I'm not sure I understood you on this.

Edit: in reference to open source, as LibreOffice is.
 
At my workplace, there are copies of some old software archived. Licenses were bought at the time that software was new. It was a business decision to just keep an old copy of that software for everybody's use. That old version is no longer on the market, and for up-to-date stuff, it would be incredibly expensive to keep buying licenses. This is the kind of stuff that the ansible/puppet config files need to point to. And yeah, it's a lot of work to make sure the server the archive lives on is backed up and maintained on a regular basis.

Even the FreeBSD Foundation doesn't exactly go out of its way to announce that, for example, 11-RELEASE is already EOL and no longer supported. It would not make tech news. I don't expect LibreOffice to make such news, either - MS is about the only organization with a sufficiently large installed base that could make tech news by declaring that a version of Office is no longer supported (like Office 2007 or 2010). Frankly, even Wolfram Mathematica or SAS, huge companies in their own right, don't have the clout to make news about dropped support for a specific version.
 
For closed-source software, where re-downloading isn't available for free, or where a company may drop support for the software or go out of business, it always makes sense to have backups. For open-source products supported by a major backer, a lot of that doesn't apply.

For a client company where it's difficult to update many computers to a newer FreeBSD version, keeping backups of packages of open-source products would be relevant. Some OpenOffice version is going to be supported, possibly for any supported FreeBSD release, even if that becomes limited to the latest one in production.

Users have a general idea of when a FreeBSD version is going EOL.

That case may be limited and relevant in the short term, especially for bugs and security vulnerabilities. Perhaps like when there was a problem where majorly used pieces of software like Firefox and Thunderbird wouldn't build properly. People discussed that on mailing lists, bug reports, and forums. A major organization couldn't afford to have that down for a few days, and they would have a backup of anything that could do the job, including what you suggested. VuXML (vulnerability/security) warnings for ports would be another issue, where a backup, or else ignoring the warning to bypass the build check, would be enough to run the software, but not enough to overcome what the security warnings were about. An alternate port would have to be used anyway, or they would have to weigh the risks of running it.
 
I'm not talking about the config files. You also need to maintain all the data that the config files point to. File backups, specific software versions, ZFS snapshots, etc. Or are you gonna go through the entire process of re-acquiring LibreOffice for FreeBSD 11-RELEASE by taking your chances that the correct versions are still available in public git repos? Even if you can automate things with Ansible or Puppet, there's just no way to avoid maintaining your own archive for quick disaster recovery. And a proper disaster recovery strategy really means having 2 or 3 copies in as many different places, to boot. 😩 Gotta think things through, all the way through.
Why would I want to keep old cruft around?
The whole point of using something like ansible is to (re)build hosts with the latest packages and not from images with outdated or even EOL software.
 
Why would I want to keep old cruft around?
The whole point of using something like ansible is to (re)build hosts with the latest packages and not from images with outdated or even EOL software.
Then just maintain one image with up-to-date software, and dd that image onto other hosts' hard drives (not over the network, but over SATA/PCIe/USB). No need for no stinkin' complexity that ansible introduces, then. :p
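In dd terms, that amounts to something like this (the image and device names are hypothetical; triple-check the target device before running):
Bash:
# Clone the golden image onto a directly attached target disk.
dd if=/images/golden.img of=/dev/da1 bs=1m status=progress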
 
The whole point of using something like ansible is to (re)build hosts with the latest packages and not from images with outdated or even EOL software.
I would think the point of automation such as ansible would be to build *repeatable* system state, not necessarily the latest. That is, if you run the same playbook (or whatever) 3 times, each time you should get exactly the same thing. Especially in production you don’t want surprises. I haven’t used ansible so this is just speculation.
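For instance, repeatability in pkg(8) terms might mean pinning exact versions against a frozen repository; everything in this sketch (repo URL, version numbers) is made up for illustration:
Bash:
#!/bin/sh
# Point pkg at a frozen (snapshotted) repo so repeated runs see the
# same package set, then install an exact version from it.
cat > /usr/local/etc/pkg/repos/frozen.conf <<'EOF'
frozen: {
    url: "https://pkg.example.org/FreeBSD:13:amd64/2023Q1",
    enabled: yes
}
EOF
env ASSUME_ALWAYS_YES=yes pkg install -r frozen libreoffice-7.4.3_1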
 
I'm not talking about the config files. You also need to maintain all the data that the config files point to. File backups, specific software versions, ZFS snapshots, etc. Or are you gonna go through the entire process of re-acquiring LibreOffice for FreeBSD 11-RELEASE by taking your chances that the correct versions are still available in public git repos? Even if you can automate things with Ansible or Puppet, there's just no way to avoid maintaining your own archive for quick disaster recovery.
What about the following “hw failure backup strategy”:

Inside the case of each server, on the motherboard (rarely on a RAID controller PCIe card), there are 1-2 free USB connectors to which You may attach an 8/16/32 GB flash memstick.

In case the main RAID card dies, or the boot drive dies or gets corrupted, the system follows the boot order and boots from this “internal memstick”.
(Booting from PXE is another thing, because an infrastructure firewall appliance may prohibit this option, or it may not work correctly.)

This memstick is GPT (UEFI/BIOS), such that:
1. it is a custom-made bootable FreeBSD RELEASE memstick with post-install scripts;
2. it contains a snapshot archive of the working system of that same server;
3. it contains Ansible playbooks for installing/updating/tuning everything needed in FreeBSD.

And after the server is completely restored and working normally, a cron'ed shell script recreates and rewrites this “internal memstick”, something like the sketch below.
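Here is a minimal sketch of that rewrite job (the device node da0p3, the mount point, and the pool name are assumptions for illustration):
Bash:
#!/bin/sh
# Refresh the system snapshot stored on the internal memstick after
# the server is known-good.
STICK=/mnt/memstick
SNAP=zroot@stick-$(date +%Y%m%d)
mkdir -p "$STICK"
mount /dev/da0p3 "$STICK" || exit 1
zfs snapshot -r "$SNAP"
zfs send -R "$SNAP" | gzip > "$STICK/system.zfs.gz"
umount "$STICK"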

P.S. Is this method good for creating a snapshot of the whole BSD system (not the whole drive), storing it on the internal memstick, and also copying it over VPN/SSH to cloud storage?
If not, please suggest a better way.
 
P.S. Is this method good for creating a snapshot of the whole BSD system (not the whole drive), storing it on the internal memstick, and also copying it over VPN/SSH to cloud storage?
If not, please suggest a better way.
Read that thread very carefully. Even the OP asks about checking the integrity of the backups, and is aware of that important pitfall in the method.

With backups, I'd prefer to deal with whole files, rather than streams of bits. First have a usable backup created locally, and then copy that anywhere you like. With streams of bits, you run a much higher risk of corrupt backups. Streaming is OK for Netflix movies, but I'd avoid streaming as a backup mechanism. Files need to be whole every step of the way.
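A whole-file workflow with an integrity check could look like this (host and path names are hypothetical):
Bash:
# Create the backup file locally, record its checksum, then copy both.
sha256 /backup/host.tar > /backup/host.tar.sha256
scp /backup/host.tar /backup/host.tar.sha256 offsite:/archive/
# On the offsite box, rerun sha256 on the copy and compare it to the
# recorded value before trusting the backup.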
 
Read that thread very carefully. Even the OP asks about checking the integrity of the backups, and is aware of that important pitfall in the method.

With backups, I'd prefer to deal with whole files, rather than streams of bits. First have a usable backup created locally, and then copy that anywhere you like. With streams of bits, you run a much higher risk of corrupt backups. Streaming is OK for Netflix movies, but I'd avoid streaming as a backup mechanism. Files need to be whole every step of the way.
Totally agree with You.

What is the best way You would suggest for creating a “snapshot” of a whole working system?

Because we are discussing servers, a lot of files will be created/changed/deleted while a “snapshot” of the working system is being made.
So creating a snapshot “on the fly” may not be possible...

Is the only option to pull the server out of service at maintenance time -> boot from a removable memstick -> create a snapshot of the whole system -> check the snapshot's consistency -> reboot the server and push it back into the work environment?
 
Totally agree with You.

What is the best way You would suggest for creating a “snapshot” of a whole working system?

Because we are discussing servers, a lot of files will be created/changed/deleted while a “snapshot” of the working system is being made.
So creating a snapshot “on the fly” may not be possible...

Is the only option to pull the server out of service at maintenance time -> boot from a removable memstick -> create a snapshot of the whole system -> check the snapshot's consistency -> reboot the server and push it back into the work environment?
Study ZFS in the Handbook. It's perfectly possible to create a snapshot of a working system on the fly if you use ZFS. ZFS has a very different design from UFS.
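For example, on a running system this takes a consistent, point-in-time snapshot of every dataset in the pool (zroot is the default pool name the FreeBSD installer uses; adjust to yours):
Bash:
# Recursive snapshot of the whole pool, taken while the system runs.
zfs snapshot -r zroot@$(date +%Y%m%dT%H%M%S)
zfs list -t snapshot    # confirm the new snapshot exists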

Study your systems, diagram out your plans, and find ways to practice your ideas. Try to connect the dots between documentation and 'best practices' to what you actually have on your systems. Buzzwords you see on the Internet are meaningless if you can't mentally make a connection to the systems you actually control.
 
Study ZFS in the Handbook. It's perfectly possible to create a snapshot of a working system on the fly if you use ZFS. ZFS has a very different design from UFS.
ZFS on all systems already.

I have read a lot on forums about the ZFS ideology and how ZFS is used in RAIDs, and concluded that this is the best filesystem (stability, install base, support community, hardware vendor support, etc...) for the next 7-10 years. ;)

I am just asking for opinions about some toolset that may be usable.

Study your systems, diagram out your plans, and find ways to practice your ideas. Try to connect the dots between documentation and 'best practices' to what you actually have on your systems.
Agree. This is my way. ;)

Buzzwords you see on the Internet are meaningless if you can't mentally make a connection to the systems you actually control.
Agree. The same feeling.
 
ZFS on all systems already.
Well, the initial setup method still matters. It starts at the fresh system install.

Elsewhere on these forums, I've seen confusion between a couple methods:
  • using ZFS for the whole disk from the get-go
  • partitioning the disk using other 3rd-party tools, and trying to tell the FreeBSD installer to find those partitions and use ZFS instead of UFS at that step.
The first method is frankly simpler, saves a LOT of prep-work, and opens up amazing flexibility down the road - not even Linux can boast that. And yes, that flexibility includes taking a snapshot of the system on the fly.

Once you have those snapshots, then you can organize your backups.
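One hedged possibility for that last step: replicate snapshots to a second machine over SSH (the host and dataset names below are assumptions):
Bash:
# Send the recursive snapshot to a backup host; -u leaves the received
# datasets unmounted, -d maps the sent paths under the target dataset.
zfs send -R zroot@2024-01-15 | ssh backup@archive.example.org \
    zfs recv -u -d tank/hosts/web1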
 
I'd prefer to use the term technology instead. ;)
There's a difference between ideology, technology, and design. I see the etymology, but at this point, it's just nitpicking on natural language technicalities. My preferred term is design logic. I do see design logic dictating how the filesystem is meant to be used.
 
it's just nitpicking on natural language technicalities.
With all respect, in this case it is not nitpicking.

https://en.wikipedia.org/wiki/Ideology said:
An ideology is a set of beliefs or philosophies attributed to a person or group of persons, especially as held for reasons that are not purely epistemic, in which "practical elements are as prominent as theoretical ones." Formerly applied primarily to economic, political, or religious theories and policies, in a tradition going back to Karl Marx and Friedrich Engels, more recent use treats the term as mainly condemnatory.
 
With all respect, in this case it is not nitpicking.
Yeah, looks like I was conflating a couple of terms: logic and ideas. When technical ideas have logical alignments, I'd imagine that design is a more appropriate English term. Both Russian and German have their share of long words that are a fusion of 2 or even 3 separate words, shortened or otherwise modified for convenience of pronunciation. That modification is often at the base of puns and misunderstandings.
 
Don't use bash(1), as you'll have to install it first. Perhaps you can base it on this
Bash:
#!/bin/sh
# Bootstrap pkg(8) itself, then install every package named in
# pkg_list.txt (one per line), logging the whole run.
env ASSUME_ALWAYS_YES=yes pkg-static bootstrap
while read -r pkgname; do
    env ASSUME_ALWAYS_YES=yes pkg install "${pkgname}"
done < pkg_list.txt | tee pkg_autoinstaller_$(date +%Y%m%dT%H%M%S).log

Or maybe even
Bash:
# Same idea in two commands: xargs hands the whole list to one pkg run.
env ASSUME_ALWAYS_YES=yes pkg-static bootstrap
env ASSUME_ALWAYS_YES=yes xargs pkg install < pkg_list.txt | tee pkg_autoinstaller_$(date +%Y%m%dT%H%M%S).log

How do I add the name of the host (for example “server.local.net”) to the “pkg_autoinstaller_” name in this script?

I tried looking through the global environment variables, but without success...
 
How do I add the name of the host (for example “server.local.net”) to the “pkg_autoinstaller_” name in this script?

I tried looking through the global environment variables, but without success...
Use something like
Bash:
command | tee pkg_autoinstaller_$(hostname -f)_$(date +%Y%m%dT%H%M%S).log
For csh(1) replace $(subshell-command) with `subshell-command`. For example echo pkg_autoinstaller_`hostname -f`_`date +%Y%m%dT%H%M%S`.log.
You can even run pkg(8) remotely: control# cat pkg_list.txt | ssh root@remote env ASSUME_ALWAYS_YES=yes xargs pkg install | tee pkg_autoinstaller_remote_`date +%Y%m%dT%H%M%S`.log. This way cat(1), tee(1) and date(1) will run on the control (local) host, whereas pkg(8) will run on the remote host.
 
Regarding the “ideology, technology, design...” discussion: astyle, getopt, gentlemen, may I buy a bottle of whiskey for both of You to make the discussion more comfortable, since it's the weekend? ;)

My mistake; the word “design” would be right in that exact context.
 
You can even run pkg(8) remotely: control# cat pkg_list.txt | ssh root@remote env ASSUME_ALWAYS_YES=yes xargs pkg install | tee pkg_autoinstaller_remote_`date +%Y%m%dT%H%M%S`.log. This way cat(1), tee(1) and date(1) will run on the control (local) host, whereas pkg(8) will run on the remote host.
Thank You!
For things like this I prefer Ansible rather than typing in a terminal app; there's less chance of mistyping...
 