Solved New FreeBSD ZFS installation question

Hello all!

I am in the process of rebuilding a server with the following specs:
Server: Dell R610
Ram: 24Gb
Storage: 6x 146GB sas 6gbs

The server will be running FreeBSD 10.2 and will mainly be used for web hosting.
The plan is to run 3-4 jails (MySQL server, web server, Samba for office data) using iocage.

Question 1: Which will suit me best between raidz2 and raidz3?
Question 2: I understand that I need to run JBOD, but I'm not quite sure how to do that.

I also read on a website that
Code:
You need to set up each disk as a separate RAID0 array in the LSI RAID controller.
This is similar to a JBOD mode, but this method allows us to use all the caching and efficiency algorithms the LSI card can offer.
Does MegaCLI get installed and configured AFTER FreeBSD has been installed?

Thank you all in advance:)
 
I'd say RAID-Z2. Z3 is more useful when you have lots of huge (2/3TB+) disks. There's also the option of RAID10 (3 mirrored sets).
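
If you went the RAID10 route, the pool would be created along these lines (a sketch only; the mfidN names assume the single-disk RAID0 volumes discussed below):
Code:
# Hypothetical striped-mirror ("RAID10") pool: three 2-way mirrors
zpool create tank mirror mfid0 mfid1 mirror mfid2 mfid3 mirror mfid4 mfid5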

If the disks don't just show up in FreeBSD by default, you'll need to create logical devices in the RAID controller. Usually you can do this in the controller BIOS as the machine boots. Sometimes you have to create 6 RAID0 arrays and sometimes you can just export each disk as a JBOD. It depends on the controller but you should be able to work it out.

You should be able to install MegaCLI software after the OS is installed, assuming it successfully finds and supports the controller.
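
MegaCLI itself lives in ports as sysutils/megacli, so after the install something like this should get it (exact package name may differ):
Code:
# Install the LSI CLI tool from packages, or build the port instead
pkg install megacli
# or: cd /usr/ports/sysutils/megacli && make install clean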
 
What kind of controller is used in the server?

PERC H200 (6Gb/s)
PERC H700 (6Gb/s) with 512MB battery-backed cache; 512MB, 1GB Non-Volatile battery-backed cache
SAS 6/iR
PERC 6/i with 256MB battery-backed cache

If there is a way to flash the controller with an IT firmware, I would go that way.
ZFS performs best with direct-attached disks.
 
Hi User23
How can I give you the info? I haven't got access to the box right now. Is there a command I could type?
sysinfo is telling me I have a MegaRAID SAS 1078.
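
For anyone else wondering, I believe the stock tools would show the same thing (untested from here, I'll confirm when I'm back on the box):
Code:
# These should identify the RAID controller from FreeBSD itself
pciconf -lv | grep -A4 mfi
dmesg | grep -i mfi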

I worked out that I could create the RAID using the following MegaCLI commands:
Code:
# Set slots 0-5 to 6 individual RAID0 volumes.
i=0; while [ $i -le 5 ] ; do MegaCli64 -cfgldadd -r0[32:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog ; i=`expr $i + 1`; done

# Create a RAID-Z2 (RAID6) ZFS volume out of 6 drives called "tank".
zpool create tank raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5
I forgot to mention that I will be installing FreeBSD from USB via an adapted version of the Calomel script.

How could I integrate these two lines of code into it?
 
It should be a PERC 6/i. And it looks like there is no way to flash an IT firmware, so you have to work with the single-disk RAID0s.
 
Question 2: I understand that I need to run JBOD, but I'm not quite sure how to do that.
[This is from memory, so may be slightly off.]

You'll get a console message like "Press Control-R to enter the configuration utility". You'll normally end up on a teal-colored screen with lots of menus. Arrow up to "Controller 0". Press F2 for the menu. If the disks or controller have been used previously, I would select "Clear config" first. Now select "Create new VD". Leave the RAID mode at the default, RAID0. Arrow down and select the first drive with the spacebar. Arrow over to the Volume Name box. Enter something that describes the physical location, like "Slot0". Arrow down to the Advanced Options box and check it. Arrow down and check Initialize and then arrow to the OK box and press enter. Wait for the "initialization complete" message (it is fast). Repeat for each of your physical drives. Exit the utility and do control-alt-delete to reboot (it will remind you).
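
Alternatively, if the OS (or a live CD) is already up, mfiutil(8) should be able to create the same single-drive RAID0 volumes without going into the BIOS utility; roughly like this (the drive IDs are examples, check the output of show drives first):
Code:
# List the physical drives and their device IDs
mfiutil show drives
# Create one single-drive RAID0 volume per drive ("jbod" here means one RAID0 per drive)
mfiutil create jbod 0 1 2 3 4 5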

Most of the PERC controllers you will run into in an R610 will be mfi(4) devices. The driver is good and solid. When you boot, you'll see messages like this:

Code:
mfi0: <Dell PERC H700 Integrated> port 0xfc00-0xfcff mem 0xdf1bc000-0xdf1bffff,0xdf1c0000-0xdf1fffff irq 32 at device 0.0 on pci2
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 4.23 
mfi0: FW MaxCmds = 1008, limiting to 128
mfid0 on mfi0
mfid0: 139392MB (285474816 sectors) RAID volume 'SysDisk' is optimal
mfid1 on mfi0
mfid1: 1907200MB (3905945600 sectors) RAID volume 'Slot0' is optimal
mfid2 on mfi0
mfid2: 1907200MB (3905945600 sectors) RAID volume 'Slot1' is optimal
mfid3 on mfi0
mfid3: 1907200MB (3905945600 sectors) RAID volume 'Slot2' is optimal
mfid4 on mfi0
mfid4: 1907200MB (3905945600 sectors) RAID volume 'Slot3' is optimal
mfid5 on mfi0
mfid5: 1907200MB (3905945600 sectors) RAID volume 'Slot4' is optimal
mfid6 on mfi0
mfid6: 1907200MB (3905945600 sectors) RAID volume 'Slot5' is optimal
mfid7 on mfi0
mfid7: 1907200MB (3905945600 sectors) RAID volume 'Slot6' is optimal
mfid8 on mfi0
mfid8: 1907200MB (3905945600 sectors) RAID volume 'Slot7' is optimal
mfid9 on mfi0
mfid9: 1907200MB (3905945600 sectors) RAID volume 'Slot8' is optimal
mfid10 on mfi0
mfid10: 1907200MB (3905945600 sectors) RAID volume 'Slot9' is optimal
mfid11 on mfi0
mfid11: 1907200MB (3905945600 sectors) RAID volume 'Slot10' is optimal
mfid12 on mfi0
mfid12: 1907200MB (3905945600 sectors) RAID volume 'Slot11' is optimal
If you want to run sysutils/smartmontools you will want to load the mfip(4) (no man page, but trust me) driver, which will make the drives appear as /dev/passN devices for smartd. If you are using SATA drives (as opposed to SAS), you will need to add "-d sat" to each entry in /usr/local/etc/smartd.conf or it will complain.

Code:
mfip0: <SCSI Passthrough Bus> on mfi0
pass0 at mfi0 bus 0 scbus0 target 0 lun 0
pass1 at mfi0 bus 0 scbus0 target 1 lun 0
pass2 at mfi0 bus 0 scbus0 target 2 lun 0
pass3 at mfi0 bus 0 scbus0 target 3 lun 0
pass4 at mfi0 bus 0 scbus0 target 4 lun 0
pass5 at mfi0 bus 0 scbus0 target 5 lun 0
pass6 at mfi0 bus 0 scbus0 target 6 lun 0
pass7 at mfi0 bus 0 scbus0 target 7 lun 0
pass8 at mfi0 bus 0 scbus0 target 8 lun 0
pass9 at mfi0 bus 0 scbus0 target 9 lun 0
pass10 at mfi0 bus 0 scbus0 target 10 lun 0
pass11 at mfi0 bus 0 scbus0 target 11 lun 0
pass12 at mfi0 bus 0 scbus0 target 12 lun 0
pass13 at mfi0 bus 0 scbus0 target 13 lun 0
The volumes and controller can be managed with the mfiutil(8) command, which is part of the base operating system. At a minimum, you will want to do # mfiutil cache N enable to enable caching (N is the volume number).
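
Putting those pieces together, a rough sketch of the bits involved (the smartd.conf entries and volume name are examples, not from this particular box):
Code:
# /boot/loader.conf - load the pass-through driver at boot
mfip_load="YES"

# /usr/local/etc/smartd.conf - one line per drive; add "-d sat" only for SATA drives
/dev/pass0 -a
/dev/pass1 -a

# Enable controller caching on a volume (repeat for each mfidN)
mfiutil cache mfid0 enable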
 
Terry_Kennedy thank you very much for such a clear explanation :)
I followed the instructions and setting up the 6 RAID-0 LDs was a breeze.
Just one question though:
Arrow down to the Advanced Options box and check it
Do I leave the fields at their defaults?
Code:
Element size:64kb
Read Policy: No Read Ah
Write Policy: Write Back

If yes, then I can move on to installing FreeBSD and playing with MegaCLI.
 
Do I leave the fields at their defaults?
Code:
Element size:64kb
Read Policy: No Read Ah
Write Policy: Write Back
Element size shouldn't matter on a single-disk volume. ZFS will probably do a better job of read-ahead than the controller will, since it knows more about how the data is used. Write back is fine as long as you have a working battery (# mfiutil show battery) on the controller. These will normally fall back to write-through (no deferred writes to disk) by themselves if the battery is missing or bad.
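
In other words, a quick sanity check after the install might look like this (the volume name is just an example):
Code:
# Check the BBU state and the current cache policy of a volume
mfiutil show battery
mfiutil cache mfid0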
 
Hi Terry_Kennedy,

Do you think I need to bother running these two commands with MegaCLI?
Code:
# Set slots 0-5 to 6 individual RAID0 volumes.
i=0; while [ $i -le 5 ] ; do MegaCli64 -cfgldadd -r0[32:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog ; i=`expr $i + 1`; done

# Create a RAID-Z2 (RAID6) ZFS volume out of 6 drives called "tank".
zpool create tank raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5

I am struggling to understand how I then install FreeBSD onto the new zpool tank, and whether it will delete the current partitions.
 
Hi Terry_Kennedy,

Do you think I need to bother running these two commands with MegaCLI?
Code:
# Set slots 0-5 to 6 individual RAID0 volumes.
i=0; while [ $i -le 5 ] ; do MegaCli64 -cfgldadd -r0[32:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog ; i=`expr $i + 1`; done

# Create a RAID-Z2 (RAID6) ZFS volume out of 6 drives called "tank".
zpool create tank raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5

I am struggling to understand how I then install FreeBSD onto the new zpool tank, and whether it will delete the current partitions.
No to the first. You created all of the volumes in the controller BIOS and you should already have the assorted mfidN devices visible in FreeBSD.

For the second, I don't use the FreeBSD installer (haven't since FreeBSD 5, actually) so I don't know how you'd create a ZFS pool with it. But you can always select the "Live CD" option (this should be a menu choice in the standard installer) and manually create the pool. Here is how I do it (lightly edited to match your controller and drive configuration):

Code:
# Label the drives so we can refer to them with device-independent labels
glabel label slot0 mfid0
glabel label slot1 mfid1
glabel label slot2 mfid2
glabel label slot3 mfid3
glabel label slot4 mfid4
glabel label slot5 mfid5
# Create a RAIDZ2 using those labels
zpool create -f tank raidz2 label/slot0 label/slot1 label/slot2 label/slot3 label/slot4 label/slot5
# Set some options
# DO NOT USE DEDUP - makes scrubs take forever!
#zfs set dedup=on tank
zfs set compression=on tank
At this point, you should be able to go back to the installer (a reboot may be needed) and proceed to install, creating needed partitions in the already-existing ZFS pool. There is probably something you need to tell the installer in order to be able to boot from ZFS, as that is a (relatively) new feature in FreeBSD.

Don't bother with a spare drive and ZFS autoreplace - it isn't connected to anything in non-Solaris operating systems - it needs an outboard utility to notice that replacement is needed. It should be possible to hack something with devd(8) but it isn't really needed on a small pool that can have 2 drives fail before losing data.
 
During install, you are given the option to install to a ZFS root. You are then given various options, such as which disks to include and whether you want a mirror or something different.

Choosing to install on ZFS adds a line to /boot/loader.conf (or possibly two lines, I don't remember off the top of my head). So you may not have to do the liveCD steps.
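
If memory serves, the lines in question are roughly these (worth double-checking after the install):
Code:
# /boot/loader.conf (added by the installer for a ZFS root)
zfs_load="YES"
# /etc/rc.conf
zfs_enable="YES"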

Terry_Kennedy, you mention you don't use the installer. How do you usually install? (Although I don't want to derail the thread, I am curious)
 
During install, you are given the option to install to a ZFS root. You are then given various options, such as which disks to include and whether you want a mirror or something different.

Choosing to install on ZFS adds a line to /boot/loader.conf (or possibly two lines, I don't remember off the top of my head). So you may not have to do the liveCD steps.
As I mentioned, I don't use the installer. But I doubt it gives you the ability to (for example) create labels and then use those labels when creating the ZFS pool. I find that labeling the drives in this manner helps a lot when one of them starts reporting errors or you need to do something to the physical drive. The mfidN unit numbers don't always correspond with slot numbers, for example. On one of my system configurations, mfid0 is a mirror of the disks in internal slots 12 and 13, and mfid1 is the disk in slot 0 ... mfid12 is the disk in slot 11.
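
Once the pool is built on the labels, glabel(8) should show which label ended up on which mfidN device:
Code:
# Show label-to-device mapping (Name, Status, Components)
glabel status
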
Terry_Kennedy, you mention you don't use the installer. How do you usually install? (Although I don't want to derail the thread, I am curious)
I boot one of the distribution discs (normally DVD1 or LIVECD) and escape to the shell. Depending on what type of system I'm setting up, I'll either dd the first few MB of a disc image (to use partitioning from a template file) or manually create partitions. I then newfs the partitions and restore from one of a number of different template images. On first boot, the restored image runs a locally-written script called "clone" (imaginative, huh?) in single-user mode which removes any old log files, etc., prompts for hostname and IP info (with defaults for the netmask and gateway taken from the template image), and does some final housecleaning (removing restoresymtable files, chmod-ing /tmp, etc.).

I can get a new system deployed within 15 minutes of unboxing and first power-on (excluding any vendor firmware updates).

This only works if you're deploying large numbers of systems from a small set of templates, though - I wouldn't suggest it for one-off use.
 
Thank you all for all your very helpful advice.
I managed to get my system up and running correctly thanks to you guys :)
 
Glad to hear it, thanks for the update. It's always good to mention what steps you took, as there were a few possibilities given; this way, the next person might be helped by it. :)
 
Just to add, I usually don't use the installer either. Might just be me, but I really dislike that it seems to create a dozen or so datasets just for the base system.

I usually follow steps very similar to the 'ZFS madness' howto on here.
I have a single poolname/ROOT/default dataset that contains the base system. Then I will create standalone datasets, such as poolname/data or poolname/web, after installation to store actual data, depending on the use of the system. (All my systems are servers, though.)

I personally don't like glabel either. It's completely non-standard, and while it doesn't usually cause an issue, I don't like having /dev/diskpX and /dev/label/labelname, one of which is a sector smaller than the other. I generally always GPT partition the disks and use GPT labels. You still get two references to each partition (the actual partition and the label), but at least they are identical. If I'm booting off the pool, the disks need to be partitioned anyway.
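
For example, per disk it's roughly this (label and device names purely illustrative), and then the pool is built from the /dev/gpt/* names:
Code:
# Partition a disk with GPT and put a GPT label on the ZFS partition
gpart create -s gpt mfid0
gpart add -t freebsd-zfs -l slot0 mfid0
# If booting from the pool, a small freebsd-boot partition plus gptzfsboot bootcode is also needed
# Repeat per disk, then create the pool from the labels
zpool create tank raidz2 gpt/slot0 gpt/slot1 gpt/slot2 gpt/slot3 gpt/slot4 gpt/slot5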

Getting used to installing this way has been useful for me regarding system restores, as I send poolname/ROOT/default from every system to a backup server. If I need to restore, I follow the exact same installation steps, but when it comes to creating the root dataset and extracting FreeBSD, I simply pull in the existing dataset from backup using send/recv instead.
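
In rough terms it's just zfs send/recv in both directions (pool, dataset, and host names below are placeholders):
Code:
# Back up the root dataset to another machine
zfs snapshot poolname/ROOT/default@backup
zfs send poolname/ROOT/default@backup | ssh backuphost zfs recv -u backuppool/hosts/thishost
# On a rebuild, pull it back instead of extracting the FreeBSD distribution
ssh backuphost zfs send backuppool/hosts/thishost@backup | zfs recv -u poolname/ROOT/default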
 
It's always good to mention what steps you took, as there were a few possibilities given; this way, the next person might be helped by it. :)
Here is what I did:
1. I set up 6 RAID-0 LDs following Terry_Kennedy's instructions:
At boot you'll get a console message like "Press Control-R to enter the configuration utility". You'll normally end up on a teal-colored screen with lots of menus. Arrow up to "Controller 0". Press F2 for the menu. If the disks or controller have been used previously, I would select "Clear config" first. Now select "Create new VD". Leave the RAID mode at the default, RAID0. Arrow down and select the first drive with the spacebar. Arrow over to the Volume Name box. Enter something that describes the physical location, like "Slot0". Arrow down to the Advanced Options box and check it. Arrow down and check Initialize and then arrow to the OK box and press enter. Wait for the "initialization complete" message (it is fast). Repeat for each of your physical drives. Exit the utility and do control-alt-delete to reboot (it will remind you).
2. Installing and setting up ZFS:
During install, you are given the option to install to a ZFS root. You are then given various options, such as which disks to include and whether you want a mirror or something different.
I chose to install on ZFS and from there selected raidz2 and the 6 disks I had set up previously as individual RAID0 volumes.

Hope this will help someone else..
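
For anyone following along, the end result can be double-checked with:
Code:
# Confirm the pool layout and health after installation
zpool status
zpool list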

PS: does anyone know why spelling mistakes are no longer highlighted in the browser?
 