Well... might as well join in a bit, for what it's worth. Apologies up front if my comment here borders on a rant; I tend to get passionate about some of these things.
For the record: I run a small company whose main focus is website hosting, web-based development (mostly Microsoft / ASP.NET), systems administration and, occasionally, software development in general. If all goes well, this week will mark the end of a migration project I've been working on for the past few months: moving my CentOS server park from one hosting provider to another, and in the process also moving from CentOS to FreeBSD.
Now, I don't keep track of everything I do. When it comes to tasks such as systems maintenance (usually a weekly job) I simply get to work. But the funny thing is: updates always start on my dedicated backup server. Although it's primarily used for storage (and as a backup MTA), it also runs the exact same environment as the main web servers.
The idea should be obvious: updates are first installed and tested on this server and from there rolled out to the rest. So I log on, do my thing (installing updates, testing, checking changelogs where applicable, etc.) and then log off.
Enter last(1)...
Now, call me crazy if you must, but I'm a huge fan of Microsoft Excel. The things I can do with that stuff... I've built logfile analysers (mainly for Exim and Postfix), used it to keep track of some of my projects (such as the previously mentioned migration project), calculated the profit I could make from hosting by comparing my costs with the prices I charged customers, did the same when moving hosting providers (this time also trying to account for the time I had to invest), etc., etc.
In my opinion it's quite a powerful tool, especially when combined with the underlying VBA engine. Which is, once again in my opinion, another highly underappreciated environment.
One of the things I managed to work out was initiating SSH connections straight from within VBA (basically using .NET routines, but let's not get too far off-topic), effectively allowing me to pick up output such as that from last(1) and import it straight into my spreadsheet.
Did you know Excel can do graphics too? ;-)
I can state for a fact that the time it takes to maintain my FreeBSD servers has been a lot less than the time I spent on CentOS.
Let's start with something trivial; you download updates for your software and you want to provision those to your other servers. What do you do?
When looking at Debian you'll soon come across /var/cache/apt/archives, where downloaded packages are kept. Yet you can't simply share this directory with other servers, because it's 'managed': the package management system keeps track of it and uses it for storage (as the name 'cache' implies). It's basically only meant for local use; copy the contents of this directory somewhere and you can quickly restore your server.
But what about other servers? Well, for that you'd need to set up your own repository. There are tools which can do this, and some of them are really impressive, but it's always an extra step you have to take on top of updating your server.
Worse: it's a step which only costs extra time. You don't gain much by doing it (read on...).
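To give an idea of what that extra step involves, here's a minimal sketch of turning already-downloaded packages into a shareable repository; it assumes the dpkg-dev package is installed, and the paths and hostname are made up:

```shell
# Collect the .deb files apt already downloaded (hypothetical repo path).
mkdir -p /srv/repo
cp /var/cache/apt/archives/*.deb /srv/repo/
cd /srv/repo

# Generate the package index apt needs (dpkg-scanpackages ships
# with the dpkg-dev package):
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# On the other servers, point apt at the exported directory with a
# sources.list line along these lines (hostname is hypothetical):
#   deb [trusted=yes] http://repo.example.com/repo ./
```

And that's the simple version; a properly signed repository takes more work still.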
@SirDice already mentioned this, but when updating your ports collection with portmaster all it takes is one extra parameter (-g) to tell your system to build packages with a command you would have used anyway. And guess what? You can safely export /usr/ports/packages if you want to.
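In practice that amounts to something like the following; the hostnames, mount point and package name are just placeholders:

```shell
# On the backup/build server: update all installed ports and keep
# a package for everything that gets built (-g).
portmaster -a -g

# /usr/ports/packages now holds the results and can be exported,
# e.g. via NFS (hypothetical line in /etc/exports):
#   /usr/ports/packages -ro web1 web2 web3

# On the production servers, install from those packages instead
# of compiling everything again (example package name):
pkg add /mnt/packages/All/apache24-2.4.29.txz
```

No separate repository tooling needed; the build you were doing anyway produces the packages.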
Another aspect.. Because updates to the base system are separated from updates for '3rd party software' (Ports collection) you can plan any 'dangerous' updates much better. Now, this is of course a personal matter; but to me 'dangerous' is taking a risk that your system might not boot. For me that's a huge issue because then I'd get dozens of unhappy customers.
Which is what I was getting at above: the funny thing is that not doing an Apache or PHP update can have much more direct consequences. So on Linux you have no choice but to do the whole update, kernel included. And that takes more time, also because it is impossible to be 100% sure that a new kernel will run on all server instances. As a direct result I now have to do a lot more extra testing, on every server, to make sure all the updates are running fine. And all because I cannot afford to fall behind on updates to PHP, Apache, SSH and so on, since those are directly exposed to the outside world.
FreeBSD? Well, of course the same thing applies: freebsd-update is also something I need to run on a per-server basis. And because I'm using a raised security level (kern.securelevel) I have to perform this in single user mode as well.
But the main advantage here is that I can plan for it. On a per-server basis even, without compromising server security by holding back updates to publicly exposed services.
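The routine I'm describing boils down to roughly this; the single user step is only there because of the raised securelevel:

```shell
# Fetch base system updates while the server is running normally;
# nothing is applied yet, so this can happen whenever convenient.
freebsd-update fetch

# kern.securelevel cannot be lowered on a running system, so drop
# to single user mode for the actual install:
shutdown now

# Then, from the single user shell:
freebsd-update install
reboot
```

The fetch can be done on all servers ahead of time; the disruptive part is scheduled per server.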
Your mileage may vary, as is the saying, but on my end this structure has saved me a lot of maintenance time.
And before anyone else mentions it: yes, I am well aware that I can install a new kernel on Linux and simply not reboot the machine straight away, thus "planning" the "update". Hopefully you also realize how stupid that actually is. Because what happens if some freak error occurs in between, say an issue with remote storage which makes it imperative to reboot the server to reset everything back to default? Now you're effectively booting into a completely untested kernel. Worse yet, it wasn't planned in any way either.
Sure, I can work around that: all I have to do is edit the boot manager configuration (/boot/grub/grub.cfg) after all the updates have been applied and tell the system not to boot the latest kernel. One way or the other, it would still take me more time than it does on FreeBSD.
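For the record, on a GRUB 2 system that workaround looks roughly like this; note that grub.cfg itself is generated, so the supported route goes through /etc/default/grub (the menu entry title below is made up):

```shell
# List the available menu entries to find the known-good kernel:
grep ^menuentry /boot/grub/grub.cfg

# Pin that entry in /etc/default/grub, e.g.:
#   GRUB_DEFAULT="Debian GNU/Linux, with Linux 4.9.0-old-amd64"
# ...and then regenerate the config:
update-grub
```

Doable, but it's yet another manual step per server, which was exactly my point.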
And that's not even mentioning the time saved by using ZFS snapshots instead of the daily incremental backups I used to make with either dump or xfsdump (my favourite; I prefer the XFS filesystem over EXT3 or EXT4). Or the time I save when having to restore something (/home/$user/.zfs/snapshot/ vs. copying the dump file from the remote server, using interactive restore when possible, then copying the file over).
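To illustrate that difference: restoring a single file from a ZFS snapshot is nothing more than a copy. The user, snapshot and file names below are hypothetical:

```shell
# See which snapshots exist:
zfs list -t snapshot

# Grab the file straight out of the hidden .zfs directory:
cp /home/alice/.zfs/snapshot/daily-2018-01-15/report.ods /home/alice/

# Compare with the dump/restore route: copy the dump file over
# from the backup server first, then fish the file out with the
# interactive restore shell:
#   restore -i -f /tmp/home.dump
```

One command versus a multi-step round trip to the backup server.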
All in all... to each his own, but in my opinion FreeBSD is in general a lot more efficient than Linux. My Excel time sheet doesn't lie; Microsoft hates open source, after all.