4x2TB disk partition help

I'm confused as to where the OS should be installed if I have four drives and I want to create a ZFS pool of two 2-disk mirror vdevs. Do I install the OS on the pool, or partition one of the drives as UFS and install the OS on that? Or do I have to get a fifth drive just for the OS and make that a UFS file system?
 
@SirDice

No, you wouldn't, since you cannot have boot/root on a pool with multiple vdevs.

@einthusan

I usually create one mirrored pool on 2x USB drives for boot/root, and then another, big pool for everything else, like /usr and /var.
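
Roughly, a minimal sketch of that layout (the device names are assumptions, with the USB sticks as da0/da1 and the data disks as ada0-ada3; a real boot pool also needs partitioning, bootcode and the bootfs property, as shown further down in this thread):

Code:
# zpool create bootpool mirror da0 da1
# zpool create tank mirror ada0 ada1 mirror ada2 ada3
# zfs create -o mountpoint=/usr tank/usr
# zfs create -o mountpoint=/var tank/var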

/Sebulon
 
Sebulon said:
@SirDice

No, you wouldn't, since you cannot have boot/root on a pool with multiple vdevs.
True, but that's not the only way to set things up.

Code:
> uname -a
FreeBSD zfstest 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30 UTC 2012     
root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
> zpool status
  pool: zdata
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zdata       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0

errors: No known data errors
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
zdata         91K  9.78G    31K  /zdata
zroot        593M  7.23G   344M  legacy
zroot/home  39.5K  7.23G  39.5K  /home
zroot/tmp     37K  7.23G    37K  /tmp
zroot/usr    248M  7.23G   248M  /usr
zroot/var    231K  7.23G   231K  /var
>
 
I did a small test in a virtual machine. While it's not possible to add another mirrored vdev to an existing ZFS root pool (it throws an error if you try), it does appear possible if you create the pool before installation.

Code:
> zpool status
  pool: zroot
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
zroot        593M  21.0G   344M  legacy
zroot/home  39.5K  21.0G  39.5K  /home
zroot/tmp     37K  21.0G    37K  /tmp
zroot/usr    248M  21.0G   248M  /usr
zroot/var    220K  21.0G   220K  /var
>

There are a million ways to set it up but I used this as an example: http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE
The only thing I changed was the zpool create. Mine looked like this:
# zpool create -o altroot=/mnt zroot mirror /dev/gpt/disk0 /dev/gpt/disk1 mirror /dev/ada2 /dev/ada3
 
@SirDice

Wow, I'll have to look into that. It all has to do with the bootfs flag. You need to have it set to be able to boot, but as soon as you've set it, you are prevented from adding more vdevs to the pool.

This is what happened to me simulating it in a virtual machine:
Code:
[CMD="#"]zpool status pool1[/CMD]
  pool: pool1
 state: ONLINE
  scan: resilvered 12.1M in 0h0m with 0 errors on Fri Feb 24 06:28:43 2012
config:

	NAME           STATE     READ WRITE CKSUM
	pool1          ONLINE       0     0     0
	  raidz1-0     ONLINE       0     0     0
	    gpt/disk1  ONLINE       0     0     0
	    gpt/disk2  ONLINE       0     0     0
	    gpt/disk3  ONLINE       0     0     0

errors: No known data errors
[CMD="#"]zpool get bootfs pool1[/CMD]
NAME   PROPERTY  [B]VALUE[/B]       SOURCE
pool1  bootfs    [B]pool1/root[/B]  local
[CMD="#"]zpool add -f pool1 mirror gpt/disk{4,5}[/CMD]
cannot add to 'pool1': root pool can not have multiple vdevs or separate logs
However, with a different pool:
Code:
[CMD="#"]zpool get bootfs rpool[/CMD]
NAME   PROPERTY  [B]VALUE[/B]   SOURCE
rpool  bootfs    [B]-[/B]       default
[CMD="#"]zpool add rpool mirror gpt/disk{4,5}[/CMD]
[CMD="#"]zpool status rpool[/CMD]
  pool: rpool
 state: ONLINE
  scan: resilvered 180K in 0h0m with 0 errors on Fri Feb 24 14:11:54 2012
config:

	NAME           STATE     READ WRITE CKSUM
	rpool          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/disk6  ONLINE       0     0     0
	    gpt/disk7  ONLINE       0     0     0
	  mirror-1     ONLINE       0     0     0
	    gpt/disk4  ONLINE       0     0     0
	    gpt/disk5  ONLINE       0     0     0

errors: No known data errors


So perhaps it is possible to change the approach to:
Code:
[CMD="#"]zpool get bootfs rpool[/CMD]
NAME   PROPERTY  [B]VALUE[/B]   SOURCE
rpool  bootfs    [B]-[/B]       default
[CMD="#"]zpool add rpool mirror gpt/disk{4,5}[/CMD]
[CMD="#"]zpool status rpool[/CMD]
  pool: rpool
 state: ONLINE
  scan: resilvered 180K in 0h0m with 0 errors on Fri Feb 24 14:11:54 2012
config:

	NAME           STATE     READ WRITE CKSUM
	rpool          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/disk6  ONLINE       0     0     0
	    gpt/disk7  ONLINE       0     0     0
	  mirror-1     ONLINE       0     0     0
	    gpt/disk4  ONLINE       0     0     0
	    gpt/disk5  ONLINE       0     0     0

errors: No known data errors
[CMD="#"]zpool set bootfs=rpool/root rpool[/CMD]
[CMD="#"]zpool get bootfs rpool[/CMD]
NAME   PROPERTY  [B]VALUE[/B]       SOURCE
pool1  bootfs    [B]rpool/root[/B]  local
But I'll have to test this in a virtual machine to verify it.

/Sebulon
 
So my understanding is that I have to create two pools? One for root and one for the data. But that would mean I won't be able to create two mirrors and put both mirrors in one pool. The reason why I want to put the two mirrors in one pool is for high I/O throughput. Would adding one mirror per pool (thus creating two pools) have the same performance gains? I doubt it.

This server is in a datacenter; I don't think they will add a USB stick.
 
einthusan said:
So my understanding is that I have to create two pools? One for root and one for the data. But that would mean I won't be able to create two mirrors and put both mirrors in one pool. The reason why I want to put the two mirrors in one pool is for high I/O throughput. Would adding one mirror per pool (thus creating two pools) have the same performance gains? I doubt it.

This server is in a datacenter; I don't think they will add a USB stick.

Like SirDice said, you can boot from a striped mirror.

But if you want my advice, avoid it. Get another disk for the OS.

George
 
Okay, so another disk is not possible because the server can only hold 5 drives. Should I create a UFS root on a small portion of one of the disks instead?
 
OK, I've confirmed this procedure works. During install, choose Shell when asked for partitioning.

# gpart create -s gpt da0
# gpart create -s gpt da1
# gpart create -s gpt da2
# gpart create -s gpt da3
# gpart add -t freebsd-boot -s 64k da0
# gpart add -t freebsd-boot -s 64k da1
# gpart add -t freebsd-boot -s 64k da2
# gpart add -t freebsd-boot -s 64k da3
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da3
# gpart add -t freebsd-zfs -l disk0 -b 2048 -a 4k da0
# gpart add -t freebsd-zfs -l disk1 -b 2048 -a 4k da1
# gpart add -t freebsd-zfs -l disk2 -b 2048 -a 4k da2
# gpart add -t freebsd-zfs -l disk3 -b 2048 -a 4k da3
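(-b 2048 starts each freebsd-zfs partition at a 1 MiB offset and -a 4k keeps it 4K-aligned, which is what you want on Advanced Format drives.)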
Code:
[CMD="#"]diskinfo -v da0[/CMD]
	512         	# sectorsize
	2000398934016	# mediasize in bytes (1.8T)
	3907029168  	# mediasize in sectors
        ...

[CMD="#"]echo "2000398934016 / 1024000 - 1" | bc[/CMD]
1953513
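(The bc line just works out the dd seek value used below, so that each sparse file ends up at just under the size of the real 2 TB disks.)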
# dd if=/dev/zero of=/tmp/tmpdsk2 bs=1024000 seek=1953513 count=1
# dd if=/dev/zero of=/tmp/tmpdsk3 bs=1024000 seek=1953513 count=1
# mdconfig -a -t vnode -f /tmp/tmpdsk2 md2
# mdconfig -a -t vnode -f /tmp/tmpdsk3 md3
# gnop create -S 4096 md2
# gnop create -S 4096 md3
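(The file-backed md devices are only stand-ins, roughly the same size as the real disks, and gnop(8) makes them report a 4096-byte sector size so the pool is created with ashift=12. They get swapped out for the real partitions right after.)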

# zpool create -O mountpoint=none -o cachefile=/tmp/zpool.cache -o autoexpand=on pool1 mirror md2.nop gpt/disk1 mirror md3.nop gpt/disk3

# zpool offline pool1 md2.nop
# mdconfig -d -u 2
# rm /tmp/tmpdsk2
# zpool replace pool1 md2.nop gpt/disk0

# zpool offline pool1 md3.nop
# mdconfig -d -u 3
# rm /tmp/tmpdsk3
# zpool replace pool1 md3.nop gpt/disk2

# zfs create -o mountpoint=legacy -o compress=on pool1/root
# zfs create pool1/root/usr
# zfs create pool1/root/usr/local
# zfs create pool1/root/usr/home
# zfs create pool1/root/var
# zfs create -o primarycache=none -o secondarycache=none -o compress=on -s -V 16g pool1/swap
(Swap size is historically/hysterically set to 2x your amount of RAM.)

# zpool set bootfs=pool1/root pool1

# mount -t zfs pool1/root /mnt
# mkdir /mnt/tmp
# mkdir /mnt/usr
# mkdir /mnt/var
# mount -t zfs pool1/root/usr /mnt/usr
# mount -t zfs pool1/root/var /mnt/var
# mkdir /mnt/usr/home
# mkdir /mnt/usr/local
# mount -t zfs pool1/root/usr/home /mnt/usr/home
# mount -t zfs pool1/root/usr/local /mnt/usr/local
Code:
[CMD="#"]ee /tmp/bsdinstall_etc/fstab[/CMD]
pool1/root            /           zfs    rw    0   0
tmpfs                 /tmp        tmpfs  rw    0   0
pool1/root/usr        /usr        zfs    rw    0   0
pool1/root/usr/home   /usr/home   zfs    rw    0   0
pool1/root/usr/local  /usr/local  zfs    rw    0   0
pool1/root/var        /var        zfs    rw    0   0
/dev/zvol/pool1/swap  none        swap   sw    0   0
# mkdir -p /mnt/boot/zfs
# cp /tmp/zpool.cache /mnt/boot/zfs/
Code:
[CMD="#"]ee /mnt/boot/loader.conf[/CMD]
autoboot_delay="5"
zfs_load="YES"
vfs.root.mountfrom="zfs:pool1/root"
# exit

When the server has rebooted, you log in and run:
# zpool scrub pool1

Then you should have this:
Code:
[CMD="#"]zpool status[/CMD]
  pool: pool1
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Mon Mar 26 13:04:19 2012
config:

	NAME           STATE     READ WRITE CKSUM
	pool1          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/disk0  ONLINE       0     0     0
	    gpt/disk1  ONLINE       0     0     0
	  mirror-1     ONLINE       0     0     0
	    gpt/disk2  ONLINE       0     0     0
	    gpt/disk3  ONLINE       0     0     0

errors: No known data errors
[CMD="#"]zdb | grep ashift[/CMD]
            ashift: 12
            ashift: 12
Two striped mirror vdevs in one pool, and it can boot from any of the hard drives, since the bootcode is on all four. I tested disconnecting disk0 and disk3 at the same time and then rebooting; no problemo.

Oh, and one more thing;):
Code:
[CMD="#"]zfs get -r compressratio pool1[/CMD]
NAME                                        PROPERTY       VALUE  SOURCE
pool1                                       compressratio  2.38x  -
pool1/root                                  compressratio  2.38x  -
pool1/root/usr                              compressratio  1.76x  -
pool1/root/usr/home                         compressratio  1.05x  -
pool1/root/usr/local                        compressratio  2.00x  -
pool1/root/var                              compressratio  5.81x  -
pool1/swap                                  compressratio  1.00x  -

/Sebulon
 
I have a server which can only take four disks, and I've set it up with a pool of two mirrored vdevs made from whole disks:

Code:
  pool: pool0
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

My entire FreeBSD installation resides on this pool.

How do I boot from it? The server has an internal USB port into which I've plugged a USB stick with a single UFS partition containing a copy of my /boot directory. The server boots the kernel from that, then mounts my root filesystem from the zpool.
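
For anyone wanting to replicate this: the relevant lines in loader.conf on the USB stick would look something like the sketch below (the dataset name is an assumption; use whatever your root dataset actually is), plus a copy of /boot/zfs/zpool.cache like in the procedure above.

Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:pool0"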
 
jem said:
I have a server which can only take four disks, and I've set it up with a pool of two mirrored vdevs made from whole disks:

Code:
  pool: pool0
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

My entire FreeBSD installation resides on this pool.

How do I boot from it? The server has an internal USB port into which I've plugged a USB stick with a single UFS partition containing a copy of my /boot directory. The server boots the kernel from that, then mounts my root filesystem from the zpool.

Nice setup. I wish I were able to do the same, but the data center won't allow such a thing.
 
Sebulon said:
OK, I've confirmed this procedure works. During install, choose Shell when asked for partitioning.

You're a savior, man. I followed these steps and was able to boot from a ZFS root, but like someone stated before, I wasn't able to add a mirror once bootfs was set. Then I was thinking about using the same guide but for all four drives during install, and I come back here to see that you have done just that, AND most importantly, verified that it works. Thanks a million!

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE
 
@Sebulon I couldn't resist, so I grabbed some coffee and tried to follow your steps. It turns out there might be a problem at this line:

Code:
zpool create -O mountpoint=none -o cachefile=/tmp/zpool.cache -o autoexpand=on pool1 mirror md2.nop gpt/disk1 mirror md3.nop gpt/disk3

I get the following error:

Code:
cannot open 'gpt/disk1': no such GEOM provider
must be a full path or shorthand device name

So I tried this:
Code:
zpool create -O mountpoint=none -o cachefile=/tmp/zpool.cache -o autoexpand=on pool1 mirror md2.nop /dev/gpt/disk1 mirror md3.nop /dev/gpt/disk3

But that didn't work either. So I thought maybe that output was expected, and went ahead with the next step:

Code:
zpool offline pool1 md2.nop

but I got the following error:
Code:
cannot open 'pool1': no such pool

Any reason why my machine might be doing this?
 
Oh shit, my bad! I didn't do one of the steps correctly, the part where you had to change the disk# values. I just used the history command and messed it up, lol. Sorry.
 