Safely updating from 6.2

I need some advice on dealing with an old system. I am trying to set up an automated backup system based on this rsbackup system (https://forums.freebsd.org/threads/3689/).

However, the most important server in our company is running FreeBSD 6.2. It hosts our customer and inventory databases, Intranet sites, and a bunch of Perl scripts that run everything in the business, and nothing has any documentation. I have plans to rebuild this server in the near future, but figuring out how to do this without breaking anything is taking time. For the time being, I just want regular backups.

The problem I am running into is installing ports, like sudo. pkg is not installed, and building from ports errors out because the ports tree doesn't match the OS version, or something like that. So I have two things to figure out: first, how far can I update this server without risking breaking things like Perl or Apache? I am thinking that if I stay within 6.x I should be OK, but I can't take a chance with this. Second, what's the oldest release that is supported by the current ports tree?

I was able to do a dump of the system, so if everything breaks I should be able to restore it, but I need to minimize downtime. I could probably get away with maybe 30 minutes to an hour on the weekend, but any more would be a problem.
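For reference, a dump-based fallback like the one above can be sketched roughly like this with dump(8)/restore(8); the /backup destination and the filesystem chosen are illustrative assumptions, not your actual layout:

```shell
# Level-0 (full) dump of the root filesystem, compressed.
# -L takes a snapshot so the dump is consistent on a live filesystem,
# -a auto-sizes the output, -u records the dump date in /etc/dumpdates,
# -f - writes to stdout so we can pipe through gzip.
dump -0Lauf - / | gzip > /backup/root.dump.gz

# To restore (booted to single-user or rescue media, onto a freshly
# newfs'ed and mounted filesystem):
#   cd /mnt && gunzip -c /backup/root.dump.gz | restore -rf -
```

Repeat per filesystem (e.g. /usr, /var), and ideally copy the dump files off the machine.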
 
You will not be able to either install new ports or upgrade this system. Your only option is to make backups of your data and reinstall from scratch.
 
Buy new hardware (I'm guessing the hardware is just as old as the OS), build a new server, and migrate the data. Really. You're going to put yourself in a lot of hot water trying to upgrade it in-place. While you're busy building the new server, the old one can still be used. Once you're happy with it, migrate the data and have everyone switch to the new server. That gives you the minimum downtime and the least risk of botching things up.

Seriously, FreeBSD 6.2 has been End-of-Life for almost a decade now; even the last 6.x release (6.4) went End-of-Life 7 years ago. Stop using it.

While you're at it, as this server has become so important, think about splitting things up to different servers. More hardware means less risk (for the business) if one of those servers dies for whatever reason. So instead of buying one big server to do everything, buy a bunch of smaller ones and spread the functionality.
 
While you're at it, as this server has become so important, think about splitting things up to different servers. More hardware means less risk (for the business) if one of those servers dies for whatever reason. So instead of buying one big server to do everything, buy a bunch of smaller ones and spread the functionality.

Not only this, but if possible create redundancy so if one server/array goes down you've got another to serve the same function.
 
Agree with everything others have said, but want to add one thing.

Clearly, this one server is very important to this business, so much so that they want it to run all the time. That makes sense. But doing regular upgrades is also important. Imagine that you do what was suggested above (buy new hardware, install FreeBSD 11.1, and transfer to it): in 10 years, you'll have the same predicament of needing to upgrade to FreeBSD version 17 and being way behind. The really important thing is to also put a plan in place to stay reasonably current.

Here is a proposal (the details aren't important). Since hardware is very inexpensive these days, buy three new servers, all identical. Two are production: one that is in use, and one that is a hot standby. The standby runs exactly the same software and gets a full copy of the data from the running one, say every hour (rsync(1) might be your friend, but with databases and transactions it gets a little more complicated). That way, if the primary server suddenly dies (say the power supply catches fire; yes, I have seen that happen), within a few minutes the business can be back up and running, no more than one hour behind on work.

The third server is a sandbox for upgrading. Every week, the third server gets regular upgrades, so it stays at current software levels. After each upgrade, it gets a copy of the data from the primary server, and you run a test suite to make sure the upgrades haven't broken anything (that is, regression testing). Then once a week there is a one-minute downtime where you take the primary server down, copy the current data to the upgraded one, and start running on the upgraded one. Right after that, you upgrade the backup node.

With three nodes, at any given moment you have a primary node, a backup node, and an upgraded node, and every week they change places.
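The hourly standby sync described above could be sketched like this; the hostname, paths, and the use of MySQL are illustrative assumptions, since the thread doesn't say which database is in use:

```shell
#!/bin/sh
# Hourly sync from the primary to the hot standby.
# Run from cron on the primary; hostname/paths are placeholders.
STANDBY=standby.example.com

# Flat files (web roots, scripts): rsync over ssh, preserving
# permissions and deleting files removed on the primary.
rsync -az --delete /usr/local/www/ "${STANDBY}:/usr/local/www/"
rsync -az --delete /usr/local/scripts/ "${STANDBY}:/usr/local/scripts/"

# Databases need a consistent snapshot, not a raw file copy.
# For MySQL, for example, stream a transactional dump to the standby:
# mysqldump --single-transaction --all-databases | ssh "${STANDBY}" mysql
```

Real database replication (as discussed later in the thread) is better than hourly dumps, but this is the minimal version of the idea.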

Ideally, all this gets automated with scripts, so no direct human intervention is required (in particular not in the middle of the night), and humans are mostly required to check that the automation is working correctly.
 
He means functional application tests. So someone that works with the applications on that server needs to check if everything is still working after the updates have been applied.
 
Hi SirDice,

thank you. Since the sentence mentioned upgrade, I was not sure whether the intent was to test the upgraded OS or the applications.

Kindest regards,

M
 
That was assuming you would have that 3 server setup ralphbsz mentioned. Updates need to be scheduled on a regular basis regardless. Some do this once a week, others once a month. Never updating a server (and let it 'rot' for 10 years) is just bad management. Planning and updating servers is one of the core responsibilities of a good system administrator.

I have some idea how this happened. The server was once set up (probably by previous admins) and (upper)management prevented updates/upgrades because that would endanger the "status-quo". It works, so don't touch it. I've been there. Trying to convince them upgrades/updates are a necessity is a royal pain in the posterior.
 
Just to reiterate: the details of my proposal above aren't important. You can do this with 2 servers, with 4 servers, or with the upgrade sandbox being a VM (perhaps rented somewhere in the cloud). You can do the upgrades daily, weekly, or monthly. The important thing is to have plans in place for (a) what to do when the primary server dies unexpectedly, and (b) keeping the server up to date. The things I'm really pushing here are to combine these two goals to reduce the amount of work (by using one of the backup servers as an upgrade sandbox), to automate the process as much as possible (so it actually gets done), and to put a test in place to make sure the upgrade actually has succeeded. And the measure of "success" needs to be at the application and user level: for example, having a correct FreeBSD installation but forgetting to copy the database (or worse, copying an obsolete one from a month ago) is not a success.
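An application-level "success" check like the one described might start as a small smoke-test script; the URL, database, table, and freshness query here are all assumptions to be replaced with real business checks:

```shell
#!/bin/sh
# Post-upgrade / post-failover smoke test. Exit non-zero on any failure
# so the surrounding automation can refuse to promote the server.
set -e

# 1. The web tier answers at all (fetch(1) is in the FreeBSD base system).
fetch -q -o /dev/null http://localhost/ || { echo "web check FAILED"; exit 1; }

# 2. The database answers AND the data is recent -- this catches the
#    "copied an obsolete database from a month ago" failure mode.
ROWS=$(mysql -N -e \
  "SELECT COUNT(*) FROM customers
   WHERE updated_at > NOW() - INTERVAL 1 DAY" crm)
[ "${ROWS}" -gt 0 ] || { echo "stale data FAILED"; exit 1; }

echo "smoke test OK"
```

The point is less the specific checks than that they run automatically after every upgrade and gate the switchover.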
 
Thank you for this wonderful discussion. The most critical thing on this server is our customer database, and I am currently trying to get it moved over to its own SQL server.

I have some idea how this happened. The server was once set up (probably by previous admins) and (upper)management prevented updates/upgrades because that would endanger the "status-quo". It works, so don't touch it. I've been there. Trying to convince them upgrades/updates are a necessity is a royal pain in the posterior.

That's pretty much it, except the previous admins and upper management are one and the same, and due to a long-standing policy of security through obscurity (and no documentation) it's going to be very difficult to determine what gets migrated and what breaks if we move to a new server.

I am thinking that after I get the database moved off to its own server, the rest can be replaced by a couple of VMs.
 
Personally, I'd stay away from VMs as redundancy, as they only provide "virtual" redundancy. Lose physical hardware on the host, and all three go down.

Given that you're running FreeBSD 6.2, I'm guessing that machine is around ten years old. A couple of consumer-grade boxes with something like an i3-7300T probably provide much greater compute power, faster drives (especially with SSDs), and much lower power consumption than just about anything from the 2008-2009 era.

I don't know the "sensitivity" of the data and your corporate policies, but there is at least one virtual-hosting service that I consider reliable that can get you a decent FreeBSD instance running for $5-10 a month and provide backup services at a very modest cost as well.

Edit: You might also want to consider using ZFS and perhaps beadm(1), which can provide the ability to snapshot and roll back the OS independently of your data.
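The snapshot-and-rollback workflow mentioned above looks roughly like this; the boot environment and dataset names are illustrative:

```shell
# Before an upgrade: snapshot the current OS as a boot environment.
beadm create pre-upgrade
beadm list                       # shows existing boot environments

# If the upgrade goes wrong, activate the old environment and reboot:
#   beadm activate pre-upgrade && reboot

# Data lives on its own ZFS dataset, snapshotted independently of the OS:
zfs snapshot tank/data@nightly
# zfs rollback tank/data@nightly   # restores the dataset if needed
```

This is what makes in-place OS upgrades far less scary: the rollback path is minutes, not a restore-from-dump weekend.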
 
The most critical thing on this server is our customer database and I am currently trying to get moved over to its own SQL server.
If you want to do this properly, use 3 servers. One is the 'main' server containing the data everybody uses. The second server is a read-only slave of the first (use master-master replication, with the second kept read-only). That's your fall-back in case server one dies: you remove the read-only flag, arrange/allow access, and go. Then you have a third server that's also a slave of the first, main, server, and it's forced to stay about an hour or so behind. That's your backup: dump its databases/tables at regular intervals and store them, preferably, off-site.
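A hedged sketch of the delayed third slave, assuming MySQL 5.6 or later (the MASTER_DELAY option doesn't exist in older versions, which need an external tool such as pt-slave-delay):

```shell
# On the third server, once it is configured as a replica of the main
# server: force it to apply changes an hour behind the master.
mysql -e "STOP SLAVE;
          CHANGE MASTER TO MASTER_DELAY = 3600;
          START SLAVE;"

# Regular off-site backup, dumped from the delayed replica so the main
# server never feels the load. Destination path is a placeholder.
mysqldump --single-transaction --all-databases | \
    gzip > /backup/crm-$(date +%Y%m%d).sql.gz
```

The hour of delay is what saves you from a fat-fingered DROP TABLE replicating instantly to every copy you have.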
 