RAID is so confusing! Please help me

OK, I have successfully run FreeBSD on my desktop for a while now. I am almost ready to take the plunge and move my server to it. Well, I would be, but software RAID on FreeBSD scares the crap out of me.

First, my current Linux setup: 3 disks, all of the same size (36.4 GB).
Important docs are kept in /srv.

mounted at /boot:  md1 = /dev/sda1, /dev/sdb1, /dev/sdc1  (RAID1, 32 MB)
mounted at /:      md2 = /dev/sda2, /dev/sdb2, /dev/sdc2  (RAID5, the majority of the space)
mounted at swap:   md3 = /dev/sda3, /dev/sdb3, /dev/sdc3  (RAID0, 1 GB swap)

Just throwing some ideas around here; your thoughts are very, very welcome.

/boot is only RAID1 because Linux cannot boot from RAID5; how does this fare in FreeBSD?
I guess you have to pair slices together in FreeBSD.
I guess I would need more RAID1 in FreeBSD, i.e. /, leaving /srv to go into RAID5.
I have heard/read that you need to recompile the kernel for RAID support (having never compiled a BSD kernel, this somewhat scares me); this may just be out-of-date docs/threads etc.
If so, how on earth do you use RAID at installation time?
If you have to apply RAID after installation, I guess I set it up on one disk and then create the mirrors on the other two, but how can this work with RAID5? Although I guess that can be set up entirely after installation.

Sorry for the babble and awful grammar; hope you can see where I am coming from.

Thanks Guys

The latest FreeBSD newb (loving it so far)
 
Additional - I know using Linux as a comparison is like comparing apples and pears, but it's the best I've got to show you guys what I'm looking for.

Where is the post edit button on these forums?
 
gazj said:
/boot is only RAID1 because Linux cannot boot from RAID5; how does this fare in FreeBSD?
AFAIK FreeBSD can boot from UFS (the standard filesystem), a gvinum volume (RAID0, RAID1 and RAID5), and ZFS.

I guess I would need more RAID1 in FreeBSD, i.e. /, leaving /srv to go into RAID5.
I normally don't use RAID on a system volume, not at home at least. But if I needed to, it would be RAID1 (mirror). All data (/usr/home and, in your case, /srv) on RAID5.

I have heard/read that you need to recompile the kernel for RAID support (having never compiled a BSD kernel, this somewhat scares me); this may just be out-of-date docs/threads etc.
No need to recompile the kernel; it should work fine with GENERIC. And recompiling the kernel is easy :e

If so, how on earth do you use RAID at installation time?
If you have to apply RAID after installation, I guess I set it up on one disk and then create the mirrors on the other two, but how can this work with RAID5? Although I guess that can be set up entirely after installation.
If you want to use anything other than UFS, I'm afraid you're usually required to do an install by hand. Sysinstall is rather limited and it's showing its age :(

Installing FreeBSD from scratch isn't as hard as you think it is, though. Once the basic OS is up and running, everything else is a piece of cake.
 
RAID on FreeBSD is a lot simpler than on Linux. The whole md vs lvm stuff is non-existent on FreeBSD. Everything is handled via GEOM.

With 3 identical disks, you can use graid3(8). Just create a single array and be done with it.
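To give you an idea, the whole thing is only a handful of commands. A sketch only, assuming the disks show up as ad4, ad5 and ad6 (use your real device names):

Code:
graid3 load
graid3 label -v data ad4 ad5 ad6
newfs /dev/raid3/data
mount /dev/raid3/data /srv
echo 'geom_raid3_load="YES"' >> /boot/loader.conf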

Or, you can install to a USB stick or CompactFlash, and use the 3 disks to create a raidz1 vdev with ZFS.
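Again just a sketch with made-up device names; the ZFS side of that is pleasantly short:

Code:
zpool create storage raidz1 ad4 ad5 ad6
zfs create storage/srv
zfs set mountpoint=/srv storage/srv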

Or, you can install to a USB stick (or CF) and use gmirror to mirror it to another USB stick. And use the 3 disks for ZFS.

Or, you can make things super complicated by creating 3 slices on each disk (s1 for root fs, s2 for swap, s3 for data). Then use gmirror(8) to create a 3-way mirror of s1 on each disk, and use graid3(8) or ZFS on s3 of each disk.

Or you can just use gmirror to create a 3-way mirror across all three disks (you lose 2 disks' worth of space, but can also lose 2 disks without losing any data).
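If you go the gmirror route, the core of it is just a couple of commands. A rough sketch, again assuming the disks are ad4, ad5 and ad6:

Code:
kldload geom_mirror
gmirror label -v gm0 ad4 ad5 ad6
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

The mirror then shows up as /dev/mirror/gm0, which you partition and newfs like any single disk.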

Or ... :)
 
phoenix said:
Or, you can make things super complicated by creating 3 slices on each disk (s1 for root fs, s2 for swap, s3 for data). Then use gmirror(8) to create a 3-way mirror of s1 on each disk, and use graid3(8) or ZFS on s3 of each disk.
Yeah, something like mine:
Code:
root@molly:~#gvinum list
8 drives:
D t1                    State: up	/dev/ad4s1d	A: 0/1535 MB (0%)
D t2                    State: up	/dev/ad5s1d	A: 0/1535 MB (0%)
D t3                    State: up	/dev/ad6s1d	A: 0/1535 MB (0%)
D t4                    State: up	/dev/ad7s1d	A: 0/1535 MB (0%)
D r1                    State: up	/dev/ad4s1e	A: 0/475147 MB (0%)
D r2                    State: up	/dev/ad5s1e	A: 0/475147 MB (0%)
D r3                    State: up	/dev/ad6s1e	A: 0/475147 MB (0%)
D r4                    State: up	/dev/ad7s1e	A: 0/475147 MB (0%)

2 volumes:
V temp                  State: up	Plexes:       1	Size:       6142 MB
V raid5                 State: up	Plexes:       1	Size:       1392 GB

2 plexes:
P temp.p0             S State: up	Subdisks:     4	Size:       6142 MB
P raid5.p0           R5 State: up	Subdisks:     4	Size:       1392 GB

8 subdisks:
S temp.p0.s0            State: up	D: t1           Size:       1535 MB
S temp.p0.s1            State: up	D: t2           Size:       1535 MB
S temp.p0.s2            State: up	D: t3           Size:       1535 MB
S temp.p0.s3            State: up	D: t4           Size:       1535 MB
S raid5.p0.s0           State: up	D: r1           Size:        464 GB
S raid5.p0.s1           State: up	D: r2           Size:        464 GB
S raid5.p0.s2           State: up	D: r3           Size:        464 GB
S raid5.p0.s3           State: up	D: r4           Size:        464 GB
root@molly:~#swapinfo 
Device          1K-blocks     Used    Avail Capacity
/dev/ad4s1b        262144       84   262060     0%
/dev/ad5s1b        262144      108   262036     0%
/dev/ad6s1b        262144       60   262084     0%
/dev/ad7s1b        262144       88   262056     0%
Total             1048576      340  1048236     0%
root@molly:~#

Four 500 GB disks, each with one slice and 3 partitions (b, d and e). Swap on all 4 drives, one RAID0 /tmp, and one RAID5 /storage.
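For the curious: a gvinum(8) config file along these lines would produce a raid5 volume like that one. This is a sketch reconstructed from the listing above, not the exact file I used ("length 0" means "use all available space"):

Code:
drive r1 device /dev/ad4s1e
drive r2 device /dev/ad5s1e
drive r3 device /dev/ad6s1e
drive r4 device /dev/ad7s1e
volume raid5
  plex org raid5 512k
    sd length 0 drive r1
    sd length 0 drive r2
    sd length 0 drive r3
    sd length 0 drive r4

You feed it to gvinum with gvinum create <configfile>.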

:stud
 
Thank you all for your answers

SirDice - your post fills me with confidence; it sounds quite easy. Have you got an install guide for installing FreeBSD in this way?

Phoenix and SirDice (2nd post) - things are starting to sound way complicated. How did a USB stick get thrown into the mix? And from that gvinum list it looks so hard to work out what is going on. But I am new to all this; maybe mdadm's output was this difficult at the start too.

Do drives get bundled into a new device like in Linux, i.e. 3 devices become mdX?
If so, do they get referenced this way in /etc/fstab?

Sorry your answers have left me with more questions; thanks for all your help though, appreciated.
 
Oh, and UFS vs ZFS, what's the beef? Just a case of ext2 > ext3 > ext4, or more to it?

I know I should (actually I have) read the Handbook, but GEOM, vinum, UFS/ZFS and all this other terminology is a lot to take in all at once.

While we're here, what does turning soft updates on/off do?
 
gazj said:
Do drives get bundled into a new device like in Linux, i.e. 3 devices become mdX?
If so, do they get referenced this way in /etc/fstab?
Yes, like mine:

Code:
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ad4s1b             none            swap    sw              0       0
/dev/ad5s1b             none            swap    sw              0       0
/dev/ad6s1b             none            swap    sw              0       0
/dev/ad7s1b             none            swap    sw              0       0

/dev/gvinum/temp        /tmp            ufs     rw              2       2

/dev/gvinum/raid5       /storage        ufs     rw              2       5

(removed the irrelevant bits)

As for reading material, let's see...
http://www.freebsd.org/doc/en/articles/fbsd-from-scratch/article.html
There are also a few nice ones for ZFS
http://wiki.freebsd.org/ZFS
This is the one I used when I set mine up
http://www.schmut.com/howto/freebsd-software-raid-howto

The trick is to plan ahead. Don't start thinking about how to partition/split things and what goes where while you're staring at a black screen with a blinking prompt. Draw it out. I had mine on a piece of paper: volume names, partitions, slices, sizes. If you don't, you're likely to lose track of it all. I know I did when I first tried to 'organize' things :x
 
At a quick glance, they look like just the job. I will read them properly tomorrow; gotta go to bed now, unfortunately. Stage 1, 2, 3 as in the Gentoo way? From what I know, Gentoo was inspired by FreeBSD, especially the ports concept.

Thanks for all your help, guys. I think I may be able to tackle this at the weekend. I'll report my luck/skill (maybe your skills rather than mine) here.
 
gazj said:
Oh, and UFS vs ZFS, what's the beef? Just a case of ext2 > ext3 > ext4, or more to it?

UFS is just a filesystem; ZFS is much more. With ZFS you get a powerful volume manager for free: mirrors, RAID6-style raidz2, hot spares, completely controller-independent, (nearly) everything you could want (shrinking a zpool won't work, for example). Fantastic features like dedup are expected one day. You create a filesystem and you do not have to spend any thought on its size. And you can (and should, and will!) create a new ZFS filesystem instead of creating a directory for each new purpose. UFS snapshots are a pain; ZFS snapshots are wonderful.
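A tiny example of what "create a filesystem instead of a directory" and "wonderful snapshots" look like in practice (the pool name 'storage' is made up):

Code:
zfs create storage/srv
zfs snapshot storage/srv@before-cleanup
zfs list -t snapshot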

But, and there's a very big "but", ZFS on FreeBSD is not for everybody, even though it is one of the highlights of FreeBSD 8.0-RELEASE and FreeBSD 7.3-RELEASE (ZFS is no longer "experimental"). sysinstall (the FreeBSD installer) has no idea about ZFS. You are not even able to install onto a simple gmirror(8). A single disk (maybe a hardware RAID seen by FreeBSD as a single disk) with some free space is still the requirement for both current releases.

Next, ZFS on i386 (in comparison to amd64) requires "tuning". You will have to rebuild your kernel (KVA_PAGES) and modify kernel values, and the values highly depend on your personal setup (RAM, number of disks, size of pools, burst usage, etc.); you won't be able to find "the solution" without trying yourself. ZFS on amd64 requires less tuning, but running ZFS on amd64 with just 3 GB of RAM may still require tuning.
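Purely as an illustration of the kind of knobs involved (the numbers here are placeholders, not recommendations; yours will differ):

Code:
# /boot/loader.conf - example values only, adapt to your RAM and workload
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"
vfs.zfs.prefetch_disable="1"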

Next, you must be able to read and understand GPTZFSBoot. Try to install it in a virtual machine of your choice first; VirtualBox works fine. If you fail, you should probably stick with UFS and the standard installer - but you can try that in your preferred VM as well.
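The heart of the GPTZFSBoot setup boils down to something like this, sketched for a single hypothetical disk ad4 (the wiki page has the full, authoritative procedure):

Code:
gpart create -s gpt ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4
gpart add -t freebsd-zfs -l disk0 ad4
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
zpool create tank /dev/gpt/disk0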

gazj said:
While we're here, what does turning soft updates on/off do?

You want to turn soft updates off on / and on for all other UFS filesystems. You can change it later using tunefs(8). Using just one root filesystem for the whole system will result in very bad performance, because soft updates are then turned off everywhere.
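For example (device names made up; run tunefs(8) on an unmounted, or read-only mounted, filesystem):

Code:
tunefs -n enable /dev/ad4s1d     # soft updates on for a data filesystem
tunefs -n disable /dev/ad4s1a    # soft updates off, e.g. for /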
 
@gazj:

Tip: trying to equate every single thing in FreeBSD with every single thing in any Linux distro will actually confuse you way more than treating and learning FreeBSD as an independent operating system in its own right (which it is). Concentrate on the task and the documentation, and apply temporary amnesia to related concepts from Linux as much as possible.
 
@DutchDaemon: I am trying to do this, but after over 5 years of using Linux it's quite hard to let go now. I even find myself trying to apply Linux principles to Windows machines at work. I know you are right, though.

@vermaden: Another fantastic guide for me to read up on.

I will do the VirtualBox experiment this weekend with one or more of the examples in this post.
 
Here's another story to add insult to injury...

A few years ago I went out to take the trash to the dumpster, where I discovered a 2U rackmount propped up against it. I realized the box itself was worth keeping, even if all the hardware inside was burnt to a crisp. So I brought the box in and popped the top. Inside I found 4 brand new U160 SCSI drives, a TYAN 2-CPU motherboard, and a 1Mb stick of RAM. I stared at it for a moment and realized the stick of RAM was in the wrong slot. I moved it to the correct slot, applied power, and bodda-bing, bodda-boom, it sprang to life. Well, I immediately shut 'er down and replaced the SONY DAT tape drive - oh, I forgot to mention that, didn't I - with a CDROM, so I could feed it a copy of FreeBSD 6.2. I then fired it back up and began the install process. Given that the first U160 was on one (slower) channel and the others were on channel B (the full-speed channel), I opted to install FreeBSD on the channel A drive (the slow one) - after all, I had no idea yet what might happen in the long run.

Well, 2 years later I was running out of space, and I decided it was time to break down and read what's currently available in GEOM. I was looking to do something really crazy; you know, like create a RAID designed for space, not redundancy. So I looked at moving the most frequently used area on the system and converting it to... gasp... yes, RAID0. Off I went, and turned the slice holding /usr into a RAID0 GEOM.

Now here comes the good part: I wrote the whole procedure down. Why? 'Cause this is something you almost never have to revisit on BSD boxen, 'cause it just keeps working, and working, and... you get the picture. So here's my recipe - oh, I should mention that this server has been used and abused for some 2.5 years now and has never lost one bit of data, even after an abrupt power outage that killed power before the genset kicked in!
Recipe follows:
Code:
GSTRIPE: RAID0
Note: this assumes the system is already installed and we will be making a stripe on additional drives, leaving the installed drive unstriped.
 o determine the device names - ad0xxx / da0xxx, etc...
 o load the GEOM stripe module: kldload geom_stripe
 o make a mountpoint, for example to mount a stripe at /stripe: mkdir /stripe
 o create a stripe device from the new disks:
   eg; gstripe label -v st0 /dev/da1 /dev/da2 /dev/da3 (remember step #1: determine device names?)
 o write a label to the stripe: bsdlabel -wB /dev/stripe/st0
 o create a filesystem on the new device: newfs -U /dev/stripe/st0a
 o mount the stripe: mount /dev/stripe/st0a /stripe
 o add to fstab:
   /dev/stripe/st0a    /stripe    ufs    rw    2    2
 o make sure geom_stripe gets loaded at boot time:
   echo 'geom_stripe_load="YES"' >> /boot/loader.conf
 o move the data to the new mount (stripe) - 2 steps:
   1) cd /oldmountpoint && tar cf - . | (cd /newmountpoint; tar xf - )
   2) drop to single-user mode (reboot; boot -s)
 o fsck -f
 o reboot, and enjoy your new stripe!

Now that shouldn't take much longer than 10-15 minutes - and on a live system, to boot!

HTH

--Chris
 