Moving a Debian server over to FreeBSD

Linux user for the best part of 20 years. Long story short, I'm fed up with the slow degradation of Linux. The needs of the desktop/laptop user have infiltrated the server side and made what I used to think was an efficient and tidy system a dog to use. So I want to give FreeBSD a serious crack on my main server, something I've wanted to do for years.

It's a generic Core 2 Duo motherboard from 10 years ago but is easily fast enough for the workload. The only interesting angle, hardware-wise, is that I'm currently using Linux md RAID5 over 4 disks. Would ZFS be the right choice? The disks are plugged into a SiI 3114 "RAID" card (I don't use its crappy RAID facilities; it just exposes the four disks):

03:01.0 RAID bus controller [0104]: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller [1095:3114] (rev 02)

It's an old card but is adequate for the task. Is ZFS the only way to do software RAID or are there other options? Ethernet is on board:

02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 02)

I won't be doing the install for some time as I need to have a look at the userland side of things too: Samba, Apache, MiniDLNA and some other services. I also use the box for my retro computing projects and AVR dev. If I get stuck I'll post on one of the other forums, I guess.

Lawrence
 
Is ZFS the only way to do software RAID or are there other options?
Yes, we also have RAID on UFS via GEOM. We have gmirror(8), gstripe(8) and graid3(8), and you can nest them for RAID 1+0 and RAID 3+0 among other configurations.
There is also a GEOM software RAID class for motherboard RAID called graid(8).
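For instance, a minimal graid3 setup is only a few commands. A sketch, assuming disks ada1..ada3 (the device names are placeholders, and graid3 wants 2^n + 1 components, so 3, 5, 9...):

```shell
# Sketch only -- device names are assumptions; run as root on FreeBSD.
graid3 load                         # load the geom_raid3 kernel module
graid3 label -v gr0 ada1 ada2 ada3  # build a 3-component RAID3 array "gr0"
newfs /dev/raid3/gr0                # put UFS on the new provider
mount /dev/raid3/gr0 /mnt
echo 'geom_raid3_load="YES"' >> /boot/loader.conf  # survive reboots
```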
I do believe the SIL3114 is supported.
Realtek LAN can be rocky but that older chipset is probably OK.
If you do go the ZFS route, you'll want to fit as much RAM as your motherboard will take.
 
First of all: check out the FreeBSD handbook. Pretty much all of the basic & common questions are answered in there, and it's generally a good place to start.

RAID can be done in a multitude of ways, of which ZFS is probably the most straightforward. However... considering the age of the server (I assume it's 32-bit?) it might not be the best choice here. ZFS is very resource intensive and likes a lot of memory, so unless you have 4 GB worth of memory I wouldn't even consider ZFS at all.

The other option for software raid would be gmirror(8), used together with the UFS filesystem. Either used on a per-disk basis or per-partition, the procedure is also covered in the handbook.
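As a taste of how little is involved, a hedged sketch of a two-disk mirror (ada1/ada2 are placeholder disk names; the handbook covers the per-partition variant):

```shell
# Sketch only -- disk names assumed; run as root on FreeBSD.
gmirror load                     # load the geom_mirror kernel module
gmirror label -v gm0 ada1 ada2   # mirror the two disks as "gm0"
newfs -U /dev/mirror/gm0         # UFS with soft updates on the mirror
mount /dev/mirror/gm0 /mnt
echo 'geom_mirror_load="YES"' >> /boot/loader.conf  # load at boot
```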

Also, a word to the wise: FreeBSD is not the same as Linux, so whatever you do, be sure not to treat it as such, because that can definitely cause some issues over time. The userland may seem familiar, but you'll soon notice that several programs behave "slightly" differently, or are maybe less "easy" than the way Linux does things.

So, for example: don't bother building your own kernel if you don't have any specific requirements, because unlike on Linux you won't gain much advantage from it (most definitely not a better-optimized kernel "just because"). Understand the separation between the base OS and the software installed on top (normally found in /usr/local) and don't try to cheat that system somehow.
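To illustrate the separation (package names vary over time; check `pkg search` first), third-party software and its configuration stay under /usr/local:

```shell
pkg install apache24      # binary package from the ports collection
ls /usr/local/etc         # third-party configs land here, not in /etc
ls /usr/local/etc/rc.d    # and their rc scripts here, not /etc/rc.d
```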

Also, maybe good to know, if you still need (or want) to keep a Linux environment around then you can. FreeBSD provides a Linux compatibility mode (see emulators/linux_base-c7 as well as the handbook) which isn't perfect but it might be able to help you with your transition. Based on CentOS but even so, it could help.

So yeah, good luck with the whole project.
 
Yes, we also have RAID on UFS via GEOM. We have gmirror(8), gstripe(8) and graid3(8), and you can nest them for RAID 1+0 and RAID 3+0 among other configurations.

I like the sound of graid3. It's 4 disks in RAID5 on the Linux install but I use one as a cold spare. I guess I could get the same result with 1+0 but I'm liking graid3 - I'll play with it on a VirtualBox setup.

There is also a GEOM software RAID class for motherboard RAID called graid(8).

I've shied away from those. It might work with the SiI but the disks are then "at a distance", and those controllers aren't a substitute for proper hardware RAID.

Realtek LAN can be rocky but that older chipset is probably OK.

I could stuff an Intel card in the box if it proves to be flakey.

If you do go the ZFS route, you'll want to fit as much RAM as your motherboard will take.

I now remember reading about the memory reqs of ZFS. 4GB in the machine, I'm not sure if it'll take more. Thinking about it, I'm leaning towards playing with graid3 first, checking out the performance etc.

When it comes time to copy the data across, I'm guessing there's an ext4 fs layer? Luckily I'm not using Btrfs or any of that modern stuff on that machine.
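From a quick look, the fusefs-ext2 port and the in-kernel ext2fs driver both seem able to read ext4, so something like this is what I'd try (da0s1 is a guess at the device name for the external disk; mounting read-only either way):

```shell
# Option 1: FUSE driver from ports (reads ext2/3/4)
pkg install fusefs-ext2
kldload fusefs
fuse-ext2 /dev/da0s1 /mnt/ext -o ro

# Option 2: in-kernel ext2fs driver (reads many ext4 volumes)
mount -t ext2fs -o ro /dev/da0s1 /mnt/ext
```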

First of all: check out the FreeBSD handbook. Pretty much all of the basic & common questions are answered in there, and it's generally a good place to start.

Thanks! I'm embarrassed to have not found that before in my travels.

RAID can be done in a multitude of ways, of which ZFS is probably the most straightforward. However... considering the age of the server (I assume it's 32-bit?)

Yes I've come to the same conclusion. The box is 64bit; it's a Core 2 Duo. It goes against the grain for a box which is basically a fileserver with a few knobs on to need that much RAM. Prior to having a Core 2 Duo board I used a VIA C7 w/ 1GB RAM on the same disk setup for years and it was fine.

Also, a word to the wise: FreeBSD is not the same as Linux, so whatever you do, be sure not to treat it as such, because that can definitely cause some issues over time. The userland may seem familiar, but you'll soon notice that several programs behave "slightly" differently, or are maybe less "easy" than the way Linux does things.

The first Unix system I used was HP-UX in 1995, so not to worry. I've also been using OS X (for its Unix roots) for a decade, and whilst it's obviously not the same, I've trained my brain to switch between it and my Linux boxes. It's a classic case of "same but different". I've already copped out and done "pkg install bash". :)

So, for example: don't bother building your own kernel if you don't have any specific requirements, because unlike on Linux you won't gain much advantage from it (most definitely not a better-optimized kernel "just because").

Most folks don't bother building their own Linux kernel these days. I used to do it occasionally even up to 5 years ago, but there's almost no point now. Pretty much everything is modular or tweakable via sysctl.

So yeah, good luck with the whole project.

Cheers! Luckily there's VirtualBox so I can do most of the research without knocking down the current Debian install. :)
 
4GB is enough for a file server with ZFS. More memory is good when you need dedup, but it is not mandatory if dedup is not used.
 
I am trying to remember if I had to reset permissions once I moved things to a FreeBSD server (sometimes with FUSE). Though you could do it over the network with rsync (and again, there may be permissions to be reset). Can someone confirm or correct whether permissions need to be reset?
 
I like the sound of graid3. It's 4 disks in RAID5 on the Linux install but I use one as a cold spare. I guess I could get the same result with 1+0 but I'm liking graid3
The advantages to me of graid3 over RAID 1+0 (gmirror + gstripe) are usable disk space and absolute ease of use.
RAID 10 eats too much space for me. graid3 seems like a reasonable compromise with good speeds too.

I have to tell you, nobody on the forum seems to use graid3; I have a thread up here and... crickets.
I practiced pulling a drive and rebuilding the array before I went forward with it. I am using a 5 disk array in a 1U SM chassis.
Checking how long a drive takes to rebuild was good. That seems to be a major sticking point on any software RAID array.
The remaining members become stressed attempting to rebuild (or resilver in ZFS parlance) a degraded array.
ralphbsz gives some very informative advice here:
https://forums.freebsd.org/threads/graid3-usage.68092/
 
The advantages to me of graid3 over RAID 1+0 (gmirror + gstripe) are usable disk space and absolute ease of use.
RAID 10 eats too much space for me. graid3 seems like a reasonable compromise with good speeds too.

The current setup is RAID5 over 4 disks with 1 cold spare, giving a capacity of 2 disks worth. This has been fine really, and I've previously had a disk go bad and have been glad of the cold spare. Yes, I like simplicity!

I have to tell you, nobody on the forum seems to use graid3; I have a thread up here and... crickets.

Interesting. Is this because:

* Memory is so cheap that it does not seem strange to stick 8GB in a server for a small LAN; or
* ZFS is cool and new; or
* ZFS offers compelling advantages

I don't care for new and cool. I assume ZFS has on-the-fly resizing, which does sound nice though. RAID5+ext4 on Linux has certainly done the job for me for about 10 years now with no observed fs corruption. And in that time I've had 2 disks die and have cycled them through (the first time I didn't have a cold spare, which made for a hairy couple of days waiting for one to turn up!)

I practiced pulling a drive and rebuilding the array before I went forward with it. I am using a 5 disk array in a 1U SM chassis.
Checking how long a drive takes to rebuild was good. That seems to be a major sticking point on any software RAID array.

One slight annoyance with my current migration plan is I have no spare server or disk set. I'm going to have to install over the top, but I do have 2 external disks with the content of the current array on them. (These have ext4 fs's on them) So I can't spend too long practising on the live box, though I will do so on my VirtualBox machine, which I've already played about making a graid3 array on.
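The drill I've been practising in VirtualBox looks roughly like this (the array name gr0 and disk ada2 are just what my test VM uses):

```shell
# Rebuild drill on a throwaway test array -- names are assumptions.
graid3 status                 # array shows DEGRADED after a disk vanishes
graid3 remove -n 2 gr0        # drop component number 2 from the array
graid3 insert -n 2 gr0 ada2   # re-add a (new) disk; the rebuild kicks off
graid3 status                 # watch progress; the fs stays mounted meanwhile
```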

Sync time is a function of disk size, mostly. I've seen hardware arrays take days to sync, too. This leads to an interesting question: in Linux md RAID you can mount and use the fs before the sync is finished. Is this the case with graid3?

Since the disks I'm using are piddly, and 2 of them are a few years old, I might take the opportunity to look at a bigger disk set. It'll make me a bit more confident knowing I can put everything back together if it all goes wrong too (which I doubt very much it will). I've not followed disk prices lately though...

The remaining members become stressed attempting to rebuild (or resilver in ZFS parlance) a degraded array.
ralphbsz gives some very informative advice here:
https://forums.freebsd.org/threads/graid3-usage.68092/

I shall have a read. I've been there, waiting 12 hours for a rebuild. It's not nice. But this is what backups are for, right? ;)
 
I am trying to remember if I had to reset permissions once I moved things to a FreeBSD server (sometimes with FUSE). Though you could do it over the network with rsync (and again, there may be permissions to be reset). Can someone confirm or correct whether permissions need to be reset?

Yes, if the UIDs/GIDs don't match there will be a mess with permissions. tar generally saves the mapping in the tarball, so it will normally recover the owner/group, assuming the user/group exists on the target. Rsync has so many options that it can almost certainly map users/groups like tar does.
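For example, both of these should carry ownership across (the paths, the UID 1001 and the user name are placeholders; --usermap needs rsync 3.1 or newer, and restoring owners needs root on the receiving side):

```shell
# tar pipe: -p on extract restores owner/group as stored in the archive
tar -cf - -C /mnt/old . | tar -xpf - -C /tank/new

# rsync: keep raw numeric IDs, or remap an old UID to a target user by name
rsync -aH --numeric-ids /mnt/old/ /tank/new/
rsync -aH --usermap=1001:lawrence /mnt/old/ /tank/new/
```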

Luckily my fileserver only really has 2 real users on it, so fixing perms will be trivial anyway.
 
One slight annoyance with my current migration plan is I have no spare server or disk set. I'm going to have to install over the top, but I do have 2 external disks with the content of the current array on them.
Then be sure to first boot cleanly with a rescue environment (install CD for example) and then save the output from dmesg somehow. Or grab /var/run/dmesg.boot; this will give you a good insight into what hardware gets detected by FreeBSD which should prepare you for possible surprises.
 
Hi, if you are considering replacing the existing small disks, buying two or three new modest size SSDs might be a viable option for the FreeBSD-based RAID solution of your choice. You can velcro them into place, and have your old and new operating systems running on the same hardware. That way you have a full retreat path, if required.
 
I am trying to remember if I had to reset permissions once I moved things to a FreeBSD server (sometimes with FUSE). Though you could do it over the network with rsync (and again, there may be permissions to be reset). Can someone confirm or correct whether permissions need to be reset?

Permissions won't change, but UID/GID have to be adjusted if they don't match. You can use rsync and map the uid/gid, or just use something like for i in $(find ...); do rsync "$i" ... && chown ... "$i"; done.

For a fileserver I'd use ZFS any time. Try to put in some more RAM (DDR2 is dirt cheap to get nowadays...), and/or limit ZFS ARC size if necessary, but I'd say the benefits of ZFS should be enough to consider it as the first choice especially in a fileserver. ZFS offers strong data integrity, self healing and easy pool expansion. Snapshots are also a major advantage over "classical" filesystems and make backup/recovery _A LOT_ easier.
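If RAM is tight, capping the ARC is one line in loader.conf. A sketch (the 1.5 GB figure is just an example for a 4 GB box; tune to taste):

```shell
# Limit the ZFS ARC to ~1.5 GB; takes effect on the next boot
echo 'vfs.zfs.arc_max="1536M"' >> /boot/loader.conf
sysctl vfs.zfs.arc_max   # verify the live value after rebooting
```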


BTW: I did the same with my storage server (and all other servers) at home a few years back. I basically nuked everything from orbit (except user data) and only kept some configs for some services as a reminder/template. First thing you will notice: FreeBSD has much better filesystem hygiene and keeps its files cleanly and predictably in the correct paths - Linux can be quite annoying and splatters working files in places where they definitely don't belong (e.g. dhcp lease files in /var/lib...). In that regard FreeBSD (as well as other BSDs and UNIX descendants like illumos) makes it much easier to keep a system clean and maintainable, especially if you start using jails (or zones) to separate services/applications from the host system.
 