New server motherboard - need tips for smooth transition

I took advantage of some sales at NewEgg, and will be upgrading my server this weekend. I'll be switching out my AM3 motherboard/CPU for a Ryzen. I'd like to make this transition as smooth as possible. Right now, the only things that will change are the motherboard, CPU, and RAM. I'll be keeping the same NIC and storage drives.

I currently have 4 hard drives: two 250 GB drives in a ZFS mirror for boot and OS, and two 4 TB drives in a ZFS mirror for /home. The system is running FreeBSD 13.1-RELEASE.

In an ideal world, I could switch out the motherboard, reconnect the drives to the same numbered SATA ports (SATA-1 on the old board to SATA-1 on the new board, etc.), and everything would "just work." I know this won't be the case, as I've worked with computers far too long to be that optimistic.

I know to back up data and config files before switching; I regularly back up my data anyway. What I'd like are some tips to help ensure that my zpools don't get messed up and I end up having to re-install the OS. Any tips for me?
 
Actually, it may just work. Last time I did something similar, I stopped in the BIOS, made sure the "boot drives" were recognized as bootable, and let it rip. It did boot fine.

So make sure the new motherboard is configured to boot the same way as the old one, because the loader is different: whether it's UEFI or BIOS (gptzfsboot) comes from the boot devices.
If the NIC is an add-in card, worst case its device name shifts, e.g. "em0" is now "em1".
/home: I would leave those drives unplugged for the first boot, log in as root, and verify everything. Then power down, plug the /home drives back in, and power up.
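To check which loader the current install uses before the swap, look at the partition types on the boot mirror (a quick sketch; ada0 here is a placeholder for one of your boot devices):

# Show the partition layout of one boot disk; ada0 is an assumption.
gpart show ada0
# A "freebsd-boot" partition means the BIOS path (gptzfsboot);
# an "efi" partition means the UEFI loader.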

I would also make sure that, before powering down the old system, it is configured to boot to console, not into a graphical login. There may be differences in X config between the old and new hardware.
If you already boot to console, double-check the X config, or move any existing X config out of the way before running startx.
 
I was faced with a similar task about a year ago. My ZFS server had failed; I'm not sure if it was the CPU or the motherboard itself, but it was all old enough that an upgrade of the motherboard, CPU, and memory was warranted.

I had a ZFS root mirrored on a pair of SSDs, and a separate large ZFS tank on spinning disks. I did everything suggested by mer above. Plus I exported the tank prior to the hardware upgrade.

On the new hardware, I checked the "boot disk" settings in the BIOS and made sure that the boot order had the SSDs first and second. I also set the boot method to what was used on the old motherboard (BIOS/Legacy).

I booted single user with the new motherboard, and it worked. I then imported the tank.
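For reference, the export/import pair looks roughly like this ("tank" being the pool name used here; substitute your own):

# On the old hardware, before shutting down:
zpool export tank
# On the new hardware, after verifying the root pool in single-user mode:
zpool import tank
zpool status tank   # confirm all mirror members show ONLINE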
 
I did something similar last year, getting a dual-Xeon server board, putting it into the desktop for a few months of testing, then finally into the server. The main problems were:
- it needs an extra graphics card, as there is none in the CPU.
- it needs an extra sound card, as there is none on the board (that was no problem: USB, 2.50 EUR incl. freight).
- it needs a way to move out all the heat when running under load (that's the real problem: moving some 200 m³ of filtered air per hour).
 
Plus I exported the tank prior to the hardware upgrade.
Oooh. That's a good step that I know I did but forgot to mention. zpool export is like "zpool scrub" on steroids. It's specifically designed to make sure the whole thing is consistent.
 
Thanks for the tips.

You reminded me of something I'll have to make sure to check. The current system is BIOS only, no UEFI, so I'll either have to change the boot process for FreeBSD or set the new system to use CSM compatibility mode.
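If it helps anyone checking the same thing, FreeBSD reports at runtime how it was booted:

# Prints "BIOS" or "UEFI" depending on how the kernel was started.
sysctl machdep.bootmethod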

I don't have X installed on the system, so that removes that complexity. I might install it with the new system just to play around with the X server and remote sessions, but for now, nothing to worry about.

I'll look more into zpool export. That looks very promising for data integrity!
 
Make sure all your partitions are labelled, so if the disk numbering gets swizzled, you can debug it with "gpart show -l" or "gpart list". If you are using /etc/fstab, then make sure all the disks are identified by label in there. And make sure your disks are physically labeled (white tape and pen) with something that lets you match the hardware to the partition in an emergency.
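For example, a label can be added to an existing GPT partition and then used in /etc/fstab (a sketch; the device, partition index, and label name are made up):

# Attach the GPT label "boot0" to partition index 1 on ada0;
# it then shows up as /dev/gpt/boot0.
gpart modify -i 1 -l boot0 ada0
# Verify the labels:
gpart show -l ada0
# /etc/fstab can then reference /dev/gpt/boot0 instead of /dev/ada0p1.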
 
zpool export is like "zpool scrub" on steroids. It's specifically designed to make sure the whole thing is consistent.
What exactly happens when one exports a zpool but never explicitly imports it afterwards? I assume not much at first, but what happens in case you export it again at a later time (i.e. two consecutive exports)?

I haven't thought of zpool export as "zpool scrub on steroids", but your statement made me wonder (as usual: thanks!)
 
jbo:
If I recall correctly, when you zpool export, it is roughly equivalent to doing "umount" on a UFS filesystem. It is specifically meant to internally make all the ZFS data consistent; the intent is that you could then physically move the disks to a different system, where you do zpool import, which "mounts" the pool and makes it available to the new system.

I don't know if you can do a zpool export without having done an import first; worst case, it would be a no-op.

"man zpool-export and man zpool-import"
 
What exactly happens when one exports a zpool but never explicitly imports it afterwards? I assume not much at first, but what happens in case you export it again at a later time (i.e. two consecutive exports)?
zpool-export(8) causes all datasets within the pool to be unmounted, and effectively removed from the ZFS configuration. If the pool has exclusive use of the disks, those disks can be removed, and relocated to another host, if you wish.

When you import the pool, ZFS re-assembles everything based on the contents of the disks. Even on the same host, the special files (/dev/xxx) may move around (if you move cables, add devices, or change slots), but the re-assembly will still work when you do the import. This does not mean that you don't need good labels, because you will eventually need a mechanism to identify individual spindles.

The question of exporting a pool twice (without first re-importing it) does not arise. You can only export a pool once -- at which point it's ostracised. You may then either forget about it, or import it on some zfs server.

[ZFS still knows what's on each disk that it can see, which is how it figures out to do an import, so "ostracised" means "almost completely ignored", but not "invisible".]
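You can watch that re-assembly happen: with a pool exported, running zpool import with no arguments scans the attached disks and lists any pools it finds, whatever the device names have become:

# List exported pools visible on the attached disks:
zpool import
# Then import one by name (or by the numeric GUID the listing shows):
zpool import tank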
 
If you created the zpool with device names (/dev/disk3) instead of labels (gpt/HoogeDisk), you're going to have a bad time.
 
If you created the zpool with device names (/dev/disk3) instead of labels (gpt/HoogeDisk), you're going to have a bad time.
I don't believe that's universally true. Once a pool is exported, you can import it, and the device names don't matter, just so long as the operating system can read the disk labels and contents, and thus ZFS can access the metadata to track the role of each provider in a pool.

I agree that using GPT labels is sensible in all contexts, and probably mandatory in the context of migrating pools between different operating systems.
 
Maybe. I did some test pools on my server with them, and it did not go well when I switched controllers; I ended up just making new pools to get things to stop complaining about ghost vdevs. Fortunately I hadn't filled up all the disks.

OP is looking for gotchas; I think this might be one.
 
If the pools were created with /dev/ada0p1 etc., carefully labelling the cables and plugging them into the same ports on the new mobo reduces the risk.
Why? Because almost all motherboards use a very similar topology: FreeBSD detects a SATA controller with 4 ports, so those appear as ada0-ada3. By paying attention to the documentation for the new and old boards, you can minimize (not eliminate) problems like that.
If the devices are not plugged into the standard controllers but into an extender of some sort, you probably will have issues.
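To see how the new board actually enumerated things, the standard tools will map device nodes to physical disks:

# List every disk CAM found, with driver name and bus position:
camcontrol devlist
# Show each disk's model and serial ("ident"), to match physical labels:
geom disk list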
 
Remember to export the pool before migrating the disks to the new server. After the move, if the disk paths are different, you can still import the pool regardless of the disk names by using the "-d" option and ignoring the cachefile. ZFS identifies the disks in a pool by internally generated GUIDs, but the path to the provider is not guaranteed to always be the same; that's why it is a good idea to use GPT labels instead of disk paths, so that when you move the disks around or change their ports, they stay the same as far as ZFS is concerned. There is one more problem: when you have a really faulty disk, one so dead it can't even be detected by the motherboard, it will be hard to identify, so put printed labels on the disks themselves to be able to tell which disk you need to replace.
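A minimal sketch of that import, assuming the pool is named "tank" and its providers carry GPT labels:

# Scan /dev/gpt instead of relying on the stale cachefile from the old host:
zpool import -d /dev/gpt tank
# If the pool was not cleanly exported on the old machine, force the import:
zpool import -f -d /dev/gpt tank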
 
I was finally able to move things over this weekend, and everything from a FreeBSD perspective went smoothly, without issues. I made sure I moved drives from SATA1 to SATA1 etc., and the system picked up as if nothing had changed, just running a lot faster now! Thanks for all the tips.
 
Glad to hear that you had a smooth transition!
However, I'd still strongly recommend setting up GPT labels for your drives. It will save your sitting-meat one day.
 
jardows Thanks for the update, and glad it went smoothly. I agree with jbo: I've found labelling with the serial number, or part of it, helps, because it's usually on the outside of the device you want to replace :)
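If anyone wants to pull the serials for those stickers, geom already knows them:

# Print each disk's name and serial number ("ident"):
geom disk list | grep -E 'Geom name|ident'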
 