ZFS questions

What would be best?

I've got the following disks: 1 TB, 1 TB, 1 TB, 1 TB, 320 GB, 120 GB, 160 GB, and more to be added later on.

What would be the best configuration to get the most "disk capacity" plus redundancy?

What does a raidz add compared to a normal pool?
Would it be better to add the drives (if possible) to a "JBOD" config, or should I make a split config: one raidz and one "normal pool"?


It seems that the 4 KiB sector drive alignment issue also applies to a ZFS config (so do I need to use a slice instead of a raw disk?).
 
I would setup the following:

- Mirrored pool using the 120GB and 160GB disks - resulting in 40GB of wasted space. If you replace the 120GB with another 160GB, the pool will automatically grow (assuming you're using the raw disks, not partitions)

- Single disk pool using the 320GB - Can add a second 320GB disk later to add mirroring.

- Raidz pool using the four raw 1TB disk devices - single disk resilience
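
If it helps, roughly what that looks like as commands (just a sketch; the pool names are placeholders and the daX device names are assumptions, so substitute whatever your disks actually show up as):
Code:
# mirror of the 120 GB and 160 GB disks
zpool create smallpool mirror da5 da6
# single-disk pool on the 320 GB; a second 320 GB can be attached later to form a mirror
zpool create scratch da4
# raidz over the four 1 TB disks
zpool create tank raidz da0 da1 da2 da3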

By "normal pool", I assume you mean simple striping over several disks. This offers no resilience. raidz is similar to raid5. You sacrifice the capacity of one disk, but can suffer a single disk failure without data loss.

I doubt the 4kB sector alignment issue will exist if you're using whole raw disk devices, as ZFS will be using everything from sector 0 to the end of the disk.
 
For me, a single disk failure is an acceptable loss.

So can you go from a single disk to a mirror to a raidz without data loss?

I once tried a zpool raidz on the raw devices, but performance was 6 MB/s,
and I never found the cause. I have since upgraded from the P4/PCI-bus machine to an AMD Athlon X2 CPU and a PCIe-bus motherboard (RAM stayed at 2 GB). I am currently testing this.
 
vso1 said:
For me, a single disk failure is an acceptable loss.

So can you go from a single disk to a mirror to a raidz without data loss?

You can add another disk to a single-disk vdev to get a mirror, but you can _not_ migrate to raidz without recreating the pool.
 
If you want pure I/O throughput, go with mirror vdevs.

If you want the most raw disk space, go with raidz vdevs.

With the disks you have available, assuming they'll all fit into one server, I'd recommend the following:
  • an OS pool using the 120 GB and the 160 GB in a mirror vdev, that will have filesystems for /, /usr, /usr/local, /usr/src, /usr/obj, /usr/ports, /var, and so on (zpool create ospool mirror da0 da1)
  • a storage pool using the four 1 TB disks in a raidz1 vdev, that will have filesystems for /home and whatever you want to use for storage (zpool create storage raidz1 da2 da3 da4 da5)
In the future, if you have the space in the system, you can add another 4-drive raidz1 to the storage pool to increase performance and storage space.
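For example (the da6-da9 device names are hypothetical, just to show the shape of the command):
Code:
# grow the existing storage pool with a second 4-disk raidz1 vdev
zpool add storage raidz1 da6 da7 da8 da9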

If you can live with only 2 TB of disk space, and want the most I/O throughput, then use 2x mirror vdevs for the storage pool (zpool create storage mirror da2 da3 mirror da4 da5). You can expand that later by adding more mirror vdevs (zpool add storage mirror da6 da7).
 
phoenix said:
  • an OS pool using the 120 GB and the 160 GB in a mirror vdev, that will have filesystems for /, /usr, /usr/local, /usr/src, /usr/obj, /usr/ports, /var, and so on (zpool create ospool mirror da0 da1)

You won't be able to set up a pool for the OS using raw disk devices and be able to boot from it. The pool will have to be built on partitions, as described here: http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror
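
Very roughly, the layout described on that wiki page looks like the following (only a sketch, assuming the first disk is da0 and a root pool named zroot; the same gpart steps are repeated on the second disk, and the wiki has the full procedure including swap, installing the OS, and the loader.conf/vfs.root.mountfrom settings):
Code:
gpart create -s gpt da0
gpart add -b 34 -s 64k -t freebsd-boot da0
gpart add -t freebsd-zfs -l disk0 da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
# after doing the same on the second disk (labelled disk1):
zpool create zroot mirror gpt/disk0 gpt/disk1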
 
You can, if you create the mirror in two parts:
Code:
# zpool create ospool da0     (the 120 GB disk)
# zpool attach ospool da0 da1       (creates a mirror out of the 120 GB and the 160 GB disks)
By attaching the second disk, you create a mirror vdev out of da0 and da1. As they are different sizes, ZFS will only use the first 120 GB of the larger disk.

Later, if you want to replace the 120 GB drive, you can either attach the new drive (to create a 3-way mirror), wait for the resilver to complete, then detach the smallest disk (best method for mirror vdevs, as you never lose redundancy); or do the "normal" offline, swap, replace process.
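
As a sketch (da2 here is hypothetical, standing in for the new, larger disk):
Code:
# attach the new disk: the vdev temporarily becomes a 3-way mirror, so redundancy is never lost
zpool attach ospool da0 da2
# wait until the resilver has finished
zpool status ospool
# then drop the smallest disk from the mirror
zpool detach ospool da0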

ZFS will use the smallest disk in the vdev as the size for all the disks in the vdev.
 
What I was getting at is that you can't boot from a pool based on raw disk devices. You have to partition the disks so that you can install the ZFS loader (zfsboot or gptzfsboot) into a partition at the beginning of the disk. Unless of course the root partition is left as UFS on a different disk.
 
phoenix said:
If you want pure I/O throughput, go with mirror vdevs.

• a storage pool using the four 1 TB disks in a raidz1 vdev, that will have filesystems for /home and whatever you want to use for storage (zpool create storage raidz1 da2 da3 da4 da5)

I did this part, although it didn't go like I expected:
• installed amd64 8.0-RC1 on the 320 GB disk
• created the raidz1 pool out of the 4x 1 TB
• created the raw disk-space zpool (no mirror)
• then edited /boot/loader.conf and added aio_load=yes and ahci_load=yes to it (see the sketch after this list)
• rebooted
• got a mountroot error, ARGH! (a lot of swearing)
• found out with "?" at the mountroot prompt that ad4 had been renamed to ada0 .. once that was corrected ..
• installed istgt --> lun0 /data/iscsitgt 26000GB (can always extend if needed)
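
For reference, the canonical form of those loader.conf lines (as far as I know the values should be quoted):
Code:
# /boot/loader.conf
ahci_load="YES"
aio_load="YES"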

Now write performance is "bad"???
Code:
zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         158G  3.47T      0    169  1.70K  12.2M
data         158G  3.47T      0    173  1.20K  12.3M
data         158G  3.47T      0    135  2.20K  10.1M
data         158G  3.47T      0    152  2.00K  10.6M
data         158G  3.47T      0    118      0  8.55M
data         158G  3.47T      0    105    102  7.17M
data         158G  3.47T      1     97  2.30K  6.60M

and
Code:
iostat ada1 ada2 ada3 ada4 5 gives
      tty            ada1             ada2             ada3             ada4             cpu
 tin  tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0    30 28.42  77  2.14  28.27  77  2.14  27.96  78  2.14  26.93  81  2.13   0  0  2  2 96
   0    68 28.29 114  3.15  28.30 114  3.16  28.01 115  3.15  27.52 116  3.12   0  0  3  2 95
   0    30 29.25 146  4.18  29.19 146  4.15  28.75 148  4.15  28.17 150  4.12   0  0  4  3 92
   0    30 29.09 149  4.23  29.14 149  4.25  28.83 150  4.22  28.21 152  4.18   0  0  3  3 93
   0    30 29.27 153  4.38  29.22 153  4.37  28.77 155  4.35  28.16 156  4.30   0  0  4  3 93

Hmmm, I am not too happy about these figures (should I be?)
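
A couple of commands that should narrow down whether one of the disks is the bottleneck (the 5-second interval and the ada1-ada4 names simply match the iostat run above):
Code:
# per-vdev breakdown of the pool's I/O, refreshed every 5 seconds
zpool iostat -v data 5
# per-disk busy percentage and latency
gstat -f 'ada[1-4]'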




phoenix said:
If you can live with only 2 TB of disk space, and want the most I/O throughput, then use 2x mirror vdevs for the storage pool (zpool create storage mirror da2 da3 mirror da4 da5). You can expand that later by adding more mirror vdevs (zpool add storage mirror da6 da7).

If I had 4x 2 TB, I think I would be considering your second suggestion ..
Actually, I have 2 "empty slots" to add disks.

The 120+160 will become "raw" space (named "bootspace"), hoping this will add enough "speed" to boot 2 desktops, which each have 200 GB+ drives .. once they can boot from iSCSI (the gPXE project) the disks will move to the fileserver and be added as mirrors to "bootspace", since there is no need to keep them in the machines. That frees a SATA PCI controller card, with which I can add another 2 disks (spare or something like that).
Thinking about this nice 4x 2.5" enclosure.

@jem
Booting from ZFS will be done once the 320 can be added to a pool and I am confident that all is tweaked and tuned .. one step at a time :)
 
jem said:
What I was getting at is that you can't boot from a pool based on raw disk devices. You have to partition the disks so that you can install the ZFS loader (zfsboot or gptzfsboot) into a partition at the beginning of the disk. Unless of course the root partition is left as UFS on a different disk.

The loader doesn't get installed into a partition. The loader is put into the first sectors of the disk, which are not used by any slice, partition, or filesystem. How do you think "dangerously dedicated" mode worked in the past, where there are no slices on the disk?
 
HADES said:
@ vso1 My setup is 3x500G raidz1 pool, and I can pull off 150MB/s write to the pool from mem.

Sounds nice; my speeds are nowhere near that and I am getting freaked out: 6 MB/s, and I have 2.4 TB to write (that takes a long time).


The 6 MB/s isn't even IDE speed. Should my raidz be built on slices instead of raw disks?
 