Updates
2010-04-04:
We've rolled out version 3 of our rsbackup system. See this thread for more information on it. (Version 2 never really saw the light of day, but was used as a stepping stone to the refactored version 3.)
Intro
A co-worker and I developed a centralised backup solution using FreeBSD, ZFS, and Rsync. The following set of posts describe how we did it.
Note: this is fairly long, and includes code dumps from all the scripts and config files used.
Server Hardware
Our central backup server uses the following hardware:
- Chenbro 5U rackmount case, with 24 hot-swappable drive bays, and a 4-way redundant PSU
- Tyan h2000M motherboard
- 2x dual-core Opteron 2200-series CPUs at 2.2 GHz
- 8 GB ECC DDR2-SDRAM
- 3Ware 9550SXU PCI-X RAID controller in a 64-bit/133 MHz PCI-X slot
- 3Ware 9650SE PCIe RAID controller in an 8x PCIe slot
- Intel PRO/1000MT 4-port gigabit PCI-X NIC
- 24x 500 GB SATA hard drives
- 2x 2 GB CompactFlash cards in CF-to-IDE adapters
OS Configuration
We're currently running the 64-bit amd64 version of FreeBSD 7.1. We'll be upgrading to 7.2 once it's released. And we are anxiously awaiting the release of 8.0 with ZFSv13 support.
Two of the gigabit NIC ports are combined using lagg(4) and connected to one gigabit switch. We're considering adding the other two ports to the lagg interface, but we're waiting for a new managed switch that supports LACP before we do.
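For reference, a lagg(4) setup like this lives in /etc/rc.conf. A minimal sketch, assuming em(4) interfaces and a placeholder address (our actual interface names and addressing are not shown here):
Code:
# /etc/rc.conf (illustrative)
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
# loadbalance for now; change to "laggproto lacp" once the LACP-capable switch arrives
ifconfig_lagg0="laggproto loadbalance laggport em0 laggport em1 192.168.0.10 netmask 255.255.255.0"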
The 2 CF cards are configured as gm0 using gmirror(8). / and /usr are installed on gm0.
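Setting up a mirror like that is only a couple of commands; roughly the following, assuming the CF cards show up as ad0 and ad2 (the actual device names depend on which IDE channels the adapters sit on):
Code:
# gmirror label -v -b round-robin gm0 ad0 ad2
# echo 'geom_mirror_load="YES"' >> /boot/loader.conf
/etc/fstab then points / and /usr at the corresponding /dev/mirror/gm0 partitions (e.g. /dev/mirror/gm0s1a and /dev/mirror/gm0s1d).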
The 3Ware RAID controllers are configured basically as glorified SATA controllers. Each drive is configured as a "SingleDrive" array, and appears to the OS as a separate drive. Using SingleDrive instead of JBOD allows the RAID controller to use its onboard cache, and allows us to use the 3dm2 monitoring software. Each drive is also named after the slot/port it is connected to (disk01 through disk24).
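The per-drive units can be created in the controller BIOS or from the OS with 3Ware's tw_cli utility. Treat the following as a rough sketch only; the controller/unit/port numbers are examples, and the exact syntax should be double-checked against the tw_cli manual for your firmware:
Code:
# tw_cli /c0 add type=single disk=0
# tw_cli /c0/u0 set cache=on
# tw_cli /c0/u0 show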
The 24 hard drives are also labelled using glabel(8), according to the slot they are in, using the same names as the RAID controller uses (disk01 through disk24).
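The labelling itself is a single glabel(8) command per drive; something along these lines, assuming the drives show up as da0 through da23 (the device names depend on how the controllers are probed):
Code:
# glabel label disk01 da0
# glabel label disk02 da1
...
# glabel label disk24 da23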
The drives are added to a ZFS pool as 3 separate 8-drive raidz2 vdevs, as follows:
Code:
# zpool create storage raidz2 label/disk01 label/disk02 label/disk03 label/disk04 label/disk05 label/disk06 label/disk07 label/disk08
# zpool add storage raidz2 label/disk09 label/disk10 label/disk11 label/disk12 label/disk13 label/disk14 label/disk15 label/disk16
# zpool add storage raidz2 label/disk17 label/disk18 label/disk19 label/disk20 label/disk21 label/disk22 label/disk23 label/disk24
This creates a "RAID0" stripe across the three "RAID6" arrays. The total storage pool size is just under 11 TB.
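As a quick sanity check on that number (zpool list reports the raw pool size, including the raidz2 parity space):
Code:
24 drives x 500 GB               = 12 TB ~= 10.9 TiB  (the SIZE reported below)
3 vdevs x 6 data drives x 500 GB =  9 TB ~=  8.2 TiB  usable after parity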
Code:
# zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        storage           ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk01  ONLINE       0     0     0
            label/disk02  ONLINE       0     0     0
            label/disk03  ONLINE       0     0     0
            label/disk04  ONLINE       0     0     0
            label/disk05  ONLINE       0     0     0
            label/disk06  ONLINE       0     0     0
            label/disk07  ONLINE       0     0     0
            label/disk08  ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk09  ONLINE       0     0     0
            label/disk10  ONLINE       0     0     0
            label/disk11  ONLINE       0     0     0
            label/disk12  ONLINE       0     0     0
            label/disk13  ONLINE       0     0     0
            label/disk14  ONLINE       0     0     0
            label/disk15  ONLINE       0     0     0
            label/disk16  ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk17  ONLINE       0     0     0
            label/disk18  ONLINE       0     0     0
            label/disk19  ONLINE       0     0     0
            label/disk20  ONLINE       0     0     0
            label/disk21  ONLINE       0     0     0
            label/disk22  ONLINE       0     0     0
            label/disk23  ONLINE       0     0     0
            label/disk24  ONLINE       0     0     0
Code:
# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
storage  10.9T  5.11T  5.76T    47%  ONLINE        -
We then created ZFS filesystems for basically everything except / and /usr (a sketch of the zfs create commands follows the list):
- /home
- /tmp
- /usr/local
- /usr/obj
- /usr/ports
- /usr/ports/distfiles
- /usr/src
- /var
- /storage/backup
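The exact dataset layout isn't shown here, but assuming flat datasets under the storage pool with explicit mountpoints (and distfiles as a child of the ports dataset), the creation looks roughly like this:
Code:
# zfs create storage/home
# zfs set mountpoint=/home storage/home
# zfs create storage/ports
# zfs set mountpoint=/usr/ports storage/ports
# zfs create storage/ports/distfiles
...
# zfs create storage/backup
The same create/set pattern repeats for the rest; storage/backup simply keeps its default /storage/backup mountpoint.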
We enabled lzjb compression on /usr/ports and /usr/src, disabled it on /usr/ports/distfiles, and enabled gzip-9 compression on /storage/backup. We also disabled atime updates on everything except /var.
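Those tweaks are plain zfs set commands. Assuming dataset names that mirror the mountpoints, as in the sketch above:
Code:
# zfs set compression=lzjb storage/ports
# zfs set compression=lzjb storage/src
# zfs set compression=off storage/ports/distfiles
# zfs set compression=gzip-9 storage/backup
# zfs set atime=off storage/home
The atime=off setting gets applied the same way to each of the other datasets, except the one mounted at /var.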