Multiple pools and SAS/SATA

Hello,

I'm a beginner with FreeBSD and I have a question about multiple ZFS pools (sorry for my English, I'm French :p). I have installed a FreeBSD server with ZFS, and a large SAS storage array is attached to it through redundant paths. I want to store my jails on that storage. I would like to do:

Code:
camcontrol devlist
gmultipath label -Av storage /dev/da0 /dev/da1
zpool create -o altroot=/mnt tank1 /dev/multipath/storage
zfs set checksum=flechtcher4 tank1
zfs create tank0/jails
zfs set mountpoint=/jails tank1
zfs unmount tank1

Is this the right way to do it? Is my pool tank1 linked to my pool tank0, and will it still be connected if I reboot my server?

Thanks
Regards
 
I am not sure why you need the tank0/jails file system. A simple directory would be enough to mount the filesystem from the other pool onto. The rest seems viable to me. Just make sure the geom_multipath kernel module is loaded on boot.
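Something like this should be enough (assuming tank1 is the pool on the multipath storage):

Code:
# a plain directory as mount point; no tank0/jails dataset needed
mkdir -p /jails
zfs set mountpoint=/jails tank1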
 
mav@ said:
I am not sure why you need the tank0/jails file system. A simple directory would be enough to mount the filesystem from the other pool onto.

Yes, that's logical. The size of tank0 is 265 GB and tank1 is 2 TB; I cannot fit 2 TB into 265 GB.

mav@ said:
Just make sure the geom_multipath kernel module is loaded on boot.

Good ;) I have to put geom_multipath_load="YES" in /boot/loader.conf.
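And I can check that the module is actually loaded after a reboot with:

Code:
kldstat | grep geom_multipath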

Markand said:
Note that the flechtcher4 checksum does not exist; perhaps you meant fletcher4?

Yes, of course.

I have a second problem. I have done this:
Code:
camcontrol devlist | grep lun\ 0
gmultipath label -Av storage /dev/da0 /dev/da1
mkdir /jails
zpool create -o altroot=/jails tank1 /dev/multipath/storage
zfs set checksum=fletcher4 tank1
zfs set mountpoint=/jails tank1


When I look at zfs list, I see:

Code:
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank1  106K  1.95T    31K  /jails/jails

Why do I have the jails/ directory twice?

Thanks
 
What do you mean by disappears? Do you see the multipath device? Do zpool status and zpool import show anything about it?
 
I don't see tank1 in zfs list or in zpool list/zpool status.

I tried zpool get all tank1 and got:

Code:
cannot open 'tank1': no such pool

But I have no problem if I import tank1 manually. It's very strange.
 
It's very strange that tank1 disappears. I'm assuming you have zfs_load="YES" in /boot/loader.conf, as otherwise you shouldn't be able to see tank0 either. Please make sure that you also have zfs_enable="YES" in /etc/rc.conf. This really only affects the mounting of additional ZFS datasets and ZFS swap, but you should have it enabled if you are using ZFS.
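In other words, the two files should contain at least:

Code:
# /boot/loader.conf
zfs_load="YES"
geom_multipath_load="YES"

# /etc/rc.conf
zfs_enable="YES"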

I'm wondering if the geom_multipath class may not yet be loaded, or may not have made the device available, at the moment the ZFS module first tries to import the pools.

Regarding the incorrect /jails/jails path: why are you using the altroot option? It is only really needed when mounting a pool exported from another system that may have 'dangerous' mount points set. For example, you pull a root ZFS disk from an old machine, put it in your live ZFS machine, and know that it has datasets configured to mount somewhere important (like /home). To stop the old pool mounting over the top of your live /home filesystem, you set an altroot, which gets prepended to every mount point. (If you import with altroot=/mnt, a dataset configured to mount on /home will mount on /mnt/home instead.) That prepending is exactly what happened to you: altroot=/jails plus mountpoint=/jails gives /jails/jails.
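For example (pool and dataset names invented for illustration):

Code:
# import a foreign pool with a safety prefix
zpool import -o altroot=/mnt oldpool
# a dataset configured with mountpoint=/home now appears at /mnt/home
zfs list -o name,mountpoint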

If you want your pool mounted on /jails, just do the following:

Code:
zpool create tank1 /dev/mydevice
zfs set mountpoint=/jails tank1

In fact, if the pool is only going to be used for jails, you could just do the following:

Code:
zpool create jails /dev/mydevice

By default a zpool contains one dataset, with the same name as the pool, mounted on /poolname, so you'll have a ZFS dataset mounted on /jails.

Personally, I would probably do the following, just in case I ever wanted to put any other data on the pool and didn't want it mixed up with my jails. I've never liked using 'tank' for pool names, especially once you get to the point of having tank0, tank1, etc. I generally use storage for large data storage pools, and system or sys for a root pool (using the system/ROOT/default boot-environment layout), although some people name pools after the hostname to make future access from other systems easier.

Code:
zpool create storage /dev/mydevice
zfs create -o mountpoint=/jails -o compress=on storage/jails
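You could then give each jail its own dataset later on (jail names purely illustrative):

Code:
zfs create storage/jails/www
zfs create storage/jails/db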
 
I restarted the installation from scratch to practice. I renamed tank0 to zroot and tank1 to jails; I agree with you that it's more understandable.

Thanks usdmatt, it worked. When I rebooted the server my zpool jails was mounted automatically, and I didn't use -o altroot.

Code:
camcontrol devlist | grep lun\ 0
gmultipath label -v storage /dev/da0 /dev/da1
mkdir /jails
zpool create jails /dev/multipath/storage
zfs set checksum=fletcher4 jails
zfs set mountpoint=/jails jails
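To check that everything came back after the reboot, the standard status commands are enough:

Code:
zpool status jails
zfs get mountpoint jails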



To finish, I have one last question about multipath. I wanted to test my redundancy:

Code:
gmultipath status
NAME     STATUS   COMPONENTS
storage  optimal  da0 (ACTIVE)
                  da1 (PASSIVE)
I unplugged the cable for da0 and it switched over to da1. That's OK, but when I plug da0 back in, I get a warning light on my disk array because da0 is no longer my main path (da0: PASSIVE and da1: ACTIVE).

How can I switch the states back?


Thanks
 
The best solution would be for da0 and da1 to share the data transmission. But if I create my multipath with -A (da0: ACTIVE and da1: ACTIVE), it's very slow, and I don't think it really shares the traffic.
 
OK, I found it. If I want to force da0 into ACTIVE mode:

Code:
gmultipath prefer storage /dev/da0

If I want to change the multipath mode to Active/Active:

Code:
gmultipath configure -A storage

But A/A mode is slower than A/P mode. Why?
 
NiReaS said:
If I want to change the multipath mode to Active/Active:

Code:
gmultipath configure -A storage

But A/A mode is slower than A/P mode. Why?

It is probably a caching effect in the storage: the controllers probably need some time to fail over. Very few devices really support Active/Active. If it is slow only on writes (as I've seen on some SAS HDDs), you may try Active/Read mode instead.
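Active/Read balances reads across both paths while keeping writes on the single active one; an existing device can be switched to it with:

Code:
gmultipath configure -R storage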
 