Software RAID 5 & FreeBSD

I've been running FreeBSD for a while now and finally want to venture into using RAID with it. I want to add a RAID 5 array to my FreeBSD server, and I can't exactly afford a hardware controller at the moment. After a bit of Googling I've found info on geom raid5, and since it seems to be used in FreeNAS I'm guessing it has a decent reputation.

Anyone have any personal experience with geom raid5? My array is going to be made up of three 250 GB WD SATA II drives that I have lying around. The box is an AMD X2 3800+ with 4 GB RAM, mainly used as my development box, git/svn/cvs, FTP, and file server, among other things. Data integrity is a concern, as this will hold a lot of my work and backups.

Any thoughts or suggestions? Thanks in advance.
 
I'm using gvinum with RAID5 (4*500GB)..

Code:
root@molly:~#gvinum list
8 drives:
D r4                    State: up	/dev/ad7s1e	A: 0/475147 MB (0%)
D t4                    State: up	/dev/ad7s1d	A: 0/1535 MB (0%)
D r3                    State: up	/dev/ad6s1e	A: 0/475147 MB (0%)
D t3                    State: up	/dev/ad6s1d	A: 0/1535 MB (0%)
D r2                    State: up	/dev/ad5s1e	A: 0/475147 MB (0%)
D t2                    State: up	/dev/ad5s1d	A: 0/1535 MB (0%)
D r1                    State: up	/dev/ad4s1e	A: 0/475147 MB (0%)
D t1                    State: up	/dev/ad4s1d	A: 0/1535 MB (0%)

2 volumes:
V temp                  State: up	Plexes:       1	Size:       6142 MB
V raid5                 State: up	Plexes:       1	Size:       1392 GB

2 plexes:
P temp.p0             S State: up	Subdisks:     4	Size:       6142 MB
P raid5.p0           R5 State: up	Subdisks:     4	Size:       1392 GB

8 subdisks:
S temp.p0.s0            State: up	D: t1           Size:       1535 MB
S temp.p0.s1            State: up	D: t2           Size:       1535 MB
S temp.p0.s2            State: up	D: t3           Size:       1535 MB
S temp.p0.s3            State: up	D: t4           Size:       1535 MB
S raid5.p0.s0           State: up	D: r1           Size:        464 GB
S raid5.p0.s1           State: up	D: r2           Size:        464 GB
S raid5.p0.s2           State: up	D: r3           Size:        464 GB
S raid5.p0.s3           State: up	D: r4           Size:        464 GB

I used this article to set things up:
http://www.schmut.com/howto/freebsd-software-raid-howto
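
For reference, the gvinum configuration that produces a layout like the one above looks roughly like this. This is only a sketch: the device names, subdisk lengths, and the 512k stripe size are examples taken from my setup, so adjust them for your own disks.
Code:
# raid5.conf -- example gvinum config for a 4-disk RAID 5 volume
drive r1 device /dev/ad4s1e
drive r2 device /dev/ad5s1e
drive r3 device /dev/ad6s1e
drive r4 device /dev/ad7s1e
volume raid5
  plex org raid5 512k
    sd length 464g drive r1
    sd length 464g drive r2
    sd length 464g drive r3
    sd length 464g drive r4

Then something along the lines of gvinum create raid5.conf, gvinum start raid5 to build the parity, and newfs /dev/gvinum/raid5 should give you a usable volume.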

But you may also want to look into using ZFS. ZFS support was brand new when I set this up and I didn't feel comfortable with it yet. I might switch to it in the near future though.
 
Voltar said:
I've been running FreeBSD for a while now and finally want to venture into using RAID with it. I want to add a RAID 5 array to my FreeBSD server, and I can't exactly afford a hardware controller at the moment. After a bit of Googling I've found info on geom raid5, and since it seems to be used in FreeNAS I'm guessing it has a decent reputation.

Anyone have any personal experience with geom raid5? My array is going to be made up of three 250 GB WD SATA II drives that I have lying around. The box is an AMD X2 3800+ with 4 GB RAM, mainly used as my development box, git/svn/cvs, FTP, and file server, among other things. Data integrity is a concern, as this will hold a lot of my work and backups.

Any thoughts or suggestions? Thanks in advance.

FreeNAS uses the graid5 patches, so you can add those patches to FreeBSD and use it as you would with FreeNAS. I haven't used it myself, but check lists.freebsd.org for opinions on it.

You can also try ZFS with RAID-Z (the equivalent of RAID 5), especially with that amount of RAM (2 GB is sufficient for ZFS, 4 GB is even better).
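
If you want to get a feel for it, creating a RAID-Z pool out of three disks is basically a one-liner. The device and pool names below are only examples, so substitute your own:
Code:
# create a raidz (RAID 5-like) pool named "storage" from three disks
zpool create storage raidz ad4 ad6 ad8
# check the layout and the space you got
zpool status storage
zfs list storage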
 
SirDice said:
I'm using gvinum with RAID5 (4*500GB)..

<snip>

I used this article to set things up:
http://www.schmut.com/howto/freebsd-software-raid-howto

But you may also want to look into using ZFS. ZFS support was brand new when I set this up and I didn't feel comfortable with it yet. I might switch to it in the near future though.

Thanks, I actually read that article; it came up as the first or second result. The only thing that made me do a double take was that the writer mentioned gvinum being unstable.

vermaden said:
FreeNAS uses the graid5 patches, so you can add those patches to FreeBSD and use it as you would with FreeNAS. I haven't used it myself, but check lists.freebsd.org for opinions on it.

You can also try ZFS with RAID-Z (the equivalent of RAID 5), especially with that amount of RAM (2 GB is sufficient for ZFS, 4 GB is even better).

Nice to know about the patches, I hadn't dug that far yet. I've come across ZFS a few times now and it looks promising. After reading a bit more I saw it was originally only available for i386 (the announcement wasn't too long ago, it seems), but the wiki page on ZFS does show amd64 support (forgot to mention that in the OP).
 
Voltar said:
Nice to know about the patches, I hadn't dug that far yet. I've come across ZFS a few times now and it looks promising. After reading a bit more I saw it was originally only available for i386 (the announcement wasn't too long ago, it seems), but the wiki page on ZFS does show amd64 support (forgot to mention that in the OP).

amd64 is generally preferred for ZFS.
 
vermaden said:
amd64 is generally preferred for ZFS.

That's good to hear.

My first attempt with ZFS didn't go too well, but I'm going to give it another shot. I'm thinking of migrating my /usr to the RAID-Z in the end; that way everything important will have redundancy. The more I read about ZFS the more I like it, and hopefully it'll scream like my RAID5/XFS setup does on Linux.
 
I suggest just putting /usr/home/ on RAID 5. Pretty much everything else in /usr is easily reinstalled if it gets nuked for some reason.

My server boots from a single IDE drive; you could use a mirror for that but I didn't bother with it. My swap is spread out over the 4 SATA disks, /tmp is a striped set (speed is more important than redundancy there), and /storage is RAID 5.
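
For the swap part, spreading it over the disks is just a matter of giving each disk its own swap partition in /etc/fstab; the kernel interleaves across all active swap devices. A sketch, with partition names as examples from my layout:
Code:
# /etc/fstab -- one swap partition per SATA disk
/dev/ad4s1b   none   swap   sw   0   0
/dev/ad5s1b   none   swap   sw   0   0
/dev/ad6s1b   none   swap   sw   0   0
/dev/ad7s1b   none   swap   sw   0   0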
 
I know this is going to sound like a n00b question, but is there a way to have software RAID on /? I've decided to use jails to replace a few boxes that I have, so I have a few extra hard drives to play around with.

I was thinking of...

/ (RAID1)
swap (no RAID)
/tmp (RAID 0 or 10)
/usr (RAID 10)
/usr/home (RAID-Z w/ ZFS)
/var (RAID 10)


All the important stuff, jails, repos, data, etc will be stored in /usr/home. Any new suggestions?

Also, what kind of throughput does everyone see with software RAID and/or ZFS?
 
How is RAID 5 from gvinum going?

As it never reached the tree (AFAIK), I never got brave enough to run it ...

 
Rather than starting a new thread, I thought I'd continue this one to keep it all together.

I've read Grog's original docs on vinum and a lot of fairly recent stuff on gvinum, including the article referenced above.

I'm using a 32-bit system (which rules out ZFS) with 8 x 1 TB drives. I can only manage 4.7G of usable formatted space, which I think is pretty poor.

Has anyone used gvinum in a RAID 5 configuration with satisfactory results? Any comments welcome.

Thanks.
 
Voltar said:
That's good to hear.

My first attempt with ZFS didn't go too well, but I'm going to give it another shot. I'm thinking of migrating my /usr to the RAID-Z in the end; that way everything important will have redundancy. The more I read about ZFS the more I like it, and hopefully it'll scream like my RAID5/XFS setup does on Linux.

For ZFS boxes, until booting from ZFS is stable and usable by everyone, I'd recommend leaving / and /usr on non-ZFS storage. That way, if you need to boot to single-user mode, you still have access to the full FreeBSD OS for troubleshooting. Putting /usr on ZFS can cause problems if you need to boot to single-user mode and you can't import the pool.

Use gmirror for / and /usr, and put /usr/local, /home, /var, /tmp, /usr/src, /usr/ports, /usr/obj onto ZFS.
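
The gmirror side of that is only a couple of commands. A rough sketch, with hypothetical device names (ad0/ad1) and mirror label (gm0):
Code:
# mirror the two system disks and load the mirror class at boot
gmirror label -v -b round-robin gm0 /dev/ad0 /dev/ad1
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# /etc/fstab then points at the mirror provider instead of the raw disk,
# e.g. /dev/mirror/gm0s1a for / and /dev/mirror/gm0s1f for /usr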
 
kbw said:
Correction, I get 4.7TB of RAID5 space formatted from 8 x 1TB physical space.

I'm using 7.2 (i386).

Ouch! You should have closer to 7 TB of usable space with a RAID 5 across 8 disks. RAID 5 gives you (X-1) x drive size of usable space for X drives, so 8 x 1 TB works out to (8-1) x 1 TB = 7 TB (roughly 6.4 TiB once the decimal/binary difference is accounted for). Either it's not using the whole disks, or something went wrong.
 
Voltar said:
I know this is going to sound like a n00b question, but is there a way to have software RAID on /? I've decided to use jails to replace a few boxes that I have, so I have a few extra hard drives to play around with.

I was thinking of...

/ (RAID1)
swap (no RAID)
/tmp (RAID 0 or 10)
/usr (RAID 10)
/usr/home (RAID-Z w/ ZFS)
/var (RAID 10)

Way overcomplicated. Simplify it:

* create a gmirror using 2 drives (or slices) for / and /usr
* create swap partitions on the same drives (or slices) as above
* put everything else into a ZFS pool using raidz or raidz2
* create ZFS filesystems for /usr/src, /usr/obj, /usr/ports, /usr/local, /home, /var, /tmp

Done. Now you can also add compression to /usr/src and /usr/ports to save space. :)
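
To make the last two bullets concrete, here's a rough sketch, assuming the raidz pool has already been created and is named "tank" (the dataset names and mountpoints are examples, not a prescription):
Code:
# one ZFS filesystem per mount point
zfs create -o mountpoint=/usr/src   tank/src
zfs create -o mountpoint=/usr/obj   tank/obj
zfs create -o mountpoint=/usr/ports tank/ports
zfs create -o mountpoint=/usr/local tank/local
zfs create -o mountpoint=/home      tank/home
zfs create -o mountpoint=/var       tank/var
zfs create -o mountpoint=/tmp       tank/tmp
chmod 1777 /tmp
# compression on the source and ports trees saves a fair amount of space
zfs set compression=on tank/src
zfs set compression=on tank/ports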

Note: don't use more than 8 disks for a single raidz1/raidz2 vdev. If you have more than 8 disks, split them up into smaller vdevs. The beauty of pooled storage like ZFS is that you can add as many vdevs as you want, and all of it will be available to the pool.

Voltar said:
Also, what kind of throughput does everyone see with software RAID and/or ZFS?

Once you start using pooled storage (ZFS), you'll find it very hard to go back to figuring out how to partition disks for different uses. :D
 
kbw said:
I'm using a 32-bit system (which rules out ZFS)

ZFS works best on 64-bit systems with lots of RAM.

However, ZFS also works just fine on 32-bit systems, especially with 2 GB or more of RAM, and even with as little as 768 MB. It just requires more fine-tuning.
 
This is just a courtesy reply. I'll have another post after the memory upgrade.

I'm now running ZFS on my 8 x 1 TB disks on an i386 with 1.5 GB RAM. All looked well; I was seeing 7.2 TB of formatted space.

While copying data from my backup (a ZFS amd64 host) over NFS, it crashed after copying 79 GB. The amd64 host is fine.

The copy was the only operation running on the two boxes. I have the property copies=2 set.

I've ordered 3 GB of RAM (the maximum the old thing can take) and I'll resume when it arrives.
 
Voltar said:
Also, what kind of throughput does everyone see with software RAID and/or ZFS?

Pool comprised of 3 vdevs; each vdev is an 8-drive raidz2, using 500 GB SATA hard drives, on 64-bit FreeBSD 7.2 with 8 GB RAM and 4 CPU cores.

iozone run as follows:
# iozone -M -e -+u -T -t 128 -S 4096 -L 64 -r 4k -s 40g -i 0 -i 1 -i 2 -i 8 -+p 70 -C
gives a sustained write throughput of 350 MBytes/sec (as shown by snmpd), which breaks down to ~15 MBytes/sec per drive (as shown by gstat). (I didn't wait for it to finish to get the read speeds.)

Tweaking the iozone command a bit:
# iozone -M -e -+u -T -t 128 -r 128k -s 4g -i 0 -i 1 -i 2 -i 8 -+p 70 -C
gives just over 400 MBytes/sec sustained write throughput, or just under 20 MBytes/sec per drive.


Pool comprised of 1 vdev, which is a 3-drive raidz, using 120 GB SATA hard drives, on 32-bit FreeBSD 7.1 with 2 GB RAM and 1 CPU core (HTT enabled).

iozone run as:
# iozone -M -e -+u -T -t 32 -r 128k -s 40960 -i 0 -i 1 -i 2 -i 8 -+p 70 -C
gives 18 MBytes/sec per drive of write throughput (as shown by gstat). And iozone says it writes at 30 MBytes/sec, re-writes at 52 MBytes/sec, reads at 3.5 GBytes/sec, with a mixed workload of 63 MBytes/sec.

Overall, I'd have to say ZFS is good. :)
 
kbw said:
This is just a courtesy reply. I'll have another post after the memory upgrade.

I'm now running ZFS on my 8 x 1 TB disks on an i386 with 1.5 GB RAM. All looked well; I was seeing 7.2 TB of formatted space.

While copying data from my backup (a ZFS amd64 host) over NFS, it crashed after copying 79 GB. The amd64 host is fine.

On a 32-bit system with only 1.5 GB of RAM, you will need to tune /boot/loader.conf to limit vm.kmem_size_max and vfs.zfs.arc_max. Otherwise, ZFS will try to use all the RAM it can grab and crash the box with out-of-memory errors.

The values you use will depend on the system and the workload. Here's what I use on my home system (3.0 GHz P4, 2 GB RAM):
Code:
vm.kmem_size_max="1G"
vfs.zfs.arc_max="256M"

That tells the kernel to use up to 1 GB for kernel memory, leaving at least 1 GB for user apps; and tells ZFS to only use 256 MB of kernel memory for the ARC.
 
I just thought I'd check in.

I've tweaked the kernel memory limits as instructed, and ZFS now seems fine on an i386 with 1.5 GB of memory on 7.2 after a couple of weeks of use. It's as stable as my amd64 box.

Thanks for your help.
 
I've been using GEOM_RAID5 for about 2 months now on my media server.
I've got 6 x 1 TB drives in a RAID 5 array. It's pretty much been up for 2 months straight, except for kernel updates.
graid5 is also used in FreeNAS. It's much lighter weight than ZFS and, as far as I can tell, seems to be just as reliable. The big plus for me was that I didn't have to put 4 gigs of RAM in a box that's just a fileserver.
 
derwood said:
It's much lighter weight than ZFS and, as far as I can tell, seems to be just as reliable. The big plus for me was that I didn't have to put 4 gigs of RAM in a box that's just a fileserver.

(Sigh, it'll be such a nice day when people stop resorting to FUD.)

You don't *need* 4 GB of RAM to run ZFS. People have run it on 32-bit systems with as little as 768 MB of RAM. People run it on laptops.

The ideal setup is a 64-bit system with 4 GB of RAM ... but that's not the minimum requirement.

I run it on my home media server which is just a 32-bit P4 @ 3 GHz with 2 GB of RAM. Runs just fine. You just need to do a bit of tuning of /boot/loader.conf.

Note: the more RAM you can put into a fileserver, the better things will run, as all "free" RAM will be used as a filesystem cache, thus speeding things up immensely.
 
Well, I thought I'd chip in here and say that I've had fantastic experiences so far with ZFS, and now that I've upgraded my fileserver to FreeBSD 7.2 it looks like it doesn't require as much manual tuning as it did before (per the ZFSTuningGuide).

It also seems that I can't just add a single drive to my RAIDZ pool? Trying to figure that one out at the moment, as I just got a few RMA'd drives back from the manufacturer.
 
You can't expand a raidz vdev (i.e. turn a 3-drive raidz vdev into a 4-drive raidz vdev).

However, you can replace the individual drives in the raidz vdev with larger drives to expand the total amount of storage space in the pool. You have to replace one drive at a time and let the resilver finish for each drive in turn. After all the drives in the raidz vdev have been replaced, drop to single-user mode and do a zpool export followed by a zpool import. After that, all the extra space will be available in the pool.
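
Command-wise, the replace cycle looks roughly like this; the pool and device names are placeholders:
Code:
# swap one old drive for a larger one
zpool replace tank ad4 ad10
zpool status tank     # wait until the resilver has completed, then repeat for the next drive
# after the last drive has been replaced: from single-user mode
zpool export tank
zpool import tank     # the extra capacity shows up after the re-import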

There are conflicting reports online about whether or not you can mix vdev types in a single pool (i.e. have a raidz1 vdev, a mirror vdev, and a single-drive vdev all in the same pool). Some sites say you can't, others show that you can. I haven't tested this yet, as all our systems use multiple, identical raidz vdevs.
 
Thanks, phoenix. I might just copy the data off and recreate the RAID-Z with the four drives, since I don't plan on getting any more 250 GB drives.
 
phoenix said:
(Sigh, it'll be such a nice day when people stop resorting to FUD.)

You don't *need* 4 GB of RAM to run ZFS. People have run it on 32-bit systems with as little as 768 MB of RAM. People run it on laptops.

The ideal setup is a 64-bit system with 4 GB of RAM ... but that's not the minimum requirement.

I run it on my home media server which is just a 32-bit P4 @ 3 GHz with 2 GB of RAM. Runs just fine. You just need to do a bit of tuning of /boot/loader.conf.

Note: the more RAM you can put into a fileserver, the better things will run, as all "free" RAM will be used as a filesystem cache, thus speeding things up immensely.

I'm putting together an i386 file server with 4 GB RAM, running FreeBSD 9.0-RELEASE and Samba 3.5.6. I've built a ZFS array using three 250 GB SATA drives. The OS drive is formatted UFS and holds all the file systems except for /data, which resides on the ZFS array.

The issue is that after a day or two of running, the server will cough and reboot. Sometimes it'll crash and dump data onto the console; other times it'll simply reboot without warning. Usually it crashes/reboots while I'm doing a huge file copy to the server after it has been running for a day or two.

I've read up on the ZFSTuningGuide on the FreeBSD wiki and adjusted the following files:

/etc/sysctl.conf:
Code:
aries# less /etc/sysctl.conf
# $FreeBSD: release/9.0.0/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0
kern.maxvnodes=400000
vfs.zfs.write_limit_override=268435456

/boot/loader.conf:
Code:
aries# less /boot/loader.conf
vfs.zfs.prefetch_disable="1"
vm.kmem_size="512M"
vm.kmem_size_max="1G"
vfs.zfs.arc_max="256M"
#vfs.zfs.vdev.cache.size="10M"
zfs_load="YES"
#vfs.zfs.txg.timeout="5"

I've installed ZFS on an amd64 FreeBSD server with good success, so I'm pretty sure this is related to using ZFS on a 32-bit machine. phoenix claims above that he has had good success running on 32-bit machines with 2 GB. Exactly what kind of additional tuning do I need to perform in order to have a stable 32-bit ZFS/Samba file server? I'm expecting hundreds of files to be opened/closed each day on this server, running in a small business environment, and I'd like to ensure it runs uninterrupted for months on end, even years.

Do I need to recompile the kernel instead of using the stock 9.0-RELEASE kernel and adjust the KVA_PAGES parameter to a larger value? Would this alleviate the crashes?

Or do I need to consider using gvinum's RAID 5 capability instead?

~Doug
 