Hi,
I have installed and configured a simple RAIDZ ZFS system on FreeBSD 9.1-RELEASE-p4. My server configuration is:
DELL PowerEdge T110 E3-1270v2
Intel Xeon E3-1270
Memory: 32GB ECC RAM
Hard disk 1: 500GB HDD
Hard disk 2: 500GB HDD
Hard disk 3: 500GB HDD
I configured my ZFS pool as follows:
Code:
# create GPT partition tables, boot partitions, and ZFS partitions
gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart add -b 34 -s 94 -t freebsd-boot ada0
gpart add -b 34 -s 94 -t freebsd-boot ada1
gpart add -b 34 -s 94 -t freebsd-boot ada2
gpart add -t freebsd-zfs -l disk0 ada0
gpart add -t freebsd-zfs -l disk1 ada1
gpart add -t freebsd-zfs -l disk2 ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
# create the zroot
zpool create -f zroot raidz /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
zpool set bootfs=zroot zroot
zpool set listsnapshots=on zroot
zpool set autoreplace=on zroot
zpool set autoexpand=on zroot
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs set copies=2 zroot
zfs set compression=lzjb zroot
zfs set mountpoint=/mnt zroot
zpool export zroot
zpool import -o cachefile=/var/tmp/zpool.cache zroot
# create the datasets
zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
zfs create zroot/usr
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs set copies=1 zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
zfs set copies=1 zroot/usr/src
zfs create zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o recordsize=16k zroot/var/db/mysql
zfs create -o recordsize=8k zroot/var/db/pgsql
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
zfs create -o compression=off -o exec=off -o setuid=off zroot/var/sonya
zfs create zroot/var/nginx
# create a swap zvol
zfs create -V 4G zroot/swap
zfs set org.freebsd:swap=on zroot/swap
zfs set checksum=off zroot/swap
zfs set copies=1 zroot/swap
# Create a symlink to /home and fix some permissions.
chmod 1777 /mnt/tmp
cd /mnt ; ln -s usr/home home
chmod 1777 /mnt/var/tmp
# install freebsd
sh
cd /usr/freebsd-dist
export DESTDIR=/mnt
for file in base.txz lib32.txz kernel.txz doc.txz ports.txz src.txz;
do (cat $file | tar --unlink -xpJf - -C ${DESTDIR:-/}); done
# copy zpool cache
cp /var/tmp/zpool.cache /mnt/boot/zfs/zpool.cache
# make sure zfs starts on boot
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom.options=rw' >> /mnt/boot/loader.conf
echo 'vfs.zfs.prefetch_disable=1' >> /mnt/boot/loader.conf
echo 'vfs.zfs.arc_max="16384M"' >> /mnt/boot/loader.conf
echo 'vfs.zfs.arc_meta_limit="128M"' >> /mnt/boot/loader.conf
echo 'aio_load="YES"' >> /mnt/boot/loader.conf
# an empty fstab must exist, otherwise FreeBSD will complain at boot
touch /mnt/etc/fstab
# correct mountpoints
zfs set readonly=on zroot/var/empty
zfs umount -af
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
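For reference, this is how I check the resulting pool and dataset layout afterwards (standard commands, nothing specific to this setup):
Code:
# pool health and vdev layout
zpool status zroot
# all datasets with their mountpoints
zfs list -r zroot
# confirm per-dataset properties
zfs get compression,copies,recordsize zroot/var/db/mysql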
After restarting, I made some modifications to loader.conf to try to tune a few things:
Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
vfs.root.mountfrom.options=rw
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.ipc.semmap=256
kern.dfldsiz=1073741824
kern.maxbcache=64M
vfs.zfs.prefetch_disable=1
vfs.zfs.arc_max="16384M"
vfs.zfs.arc_meta_limit="128M"
accf_data_load="YES"
accf_http_load="YES"
aio_load="YES"
cc_htcp_load="YES"
net.inet.tcp.tcbhashsize=4096
hw.em.rxd=2048
hw.em.txd=2048
hw.em.rx_process_limit="-1"
net.link.ifqmaxlen=512
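To confirm that the ZFS tunables actually take effect, I check them after boot (the values are reported in bytes):
Code:
sysctl vfs.zfs.arc_max
sysctl vfs.zfs.arc_meta_limit
sysctl vfs.zfs.prefetch_disable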
Everything is working fine, but I want to improve read/write performance because I have to move a few clients with heavy-traffic CMS websites within a week. I have two free HDD bays left in my server chassis.
I want to add an SSD for caching, and from what I have found by googling around, I understand this can be done on an existing ZFS pool. I also read that I could partition an SSD to hold both the L2ARC and the ZIL.
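From what I understand, the generic commands for attaching cache and log devices to an existing pool would look something like this (the gpt labels here are just placeholders I made up):
Code:
# add an L2ARC (read cache) device to an existing pool
zpool add zroot cache /dev/gpt/cache0
# add a dedicated ZIL (log) device to an existing pool
zpool add zroot log /dev/gpt/log0
# a log or cache device can be removed again later if needed
zpool remove zroot /dev/gpt/log0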
My provider can supply the following SSD drives, in 120GB and 240GB sizes (I do not know yet which is better for my scenario):
- OCZ Agility 3
- Kingston SSDNow KC300
- Crucial M4 and M5 series
Now, the questions.
1. Should I use the two remaining free bays to add two SSDs and mirror the cache? Or would it suffice to add another 500GB HDD to the existing zroot pool plus one SSD as cache?
2. Would it be better/safer to put the ZIL and L2ARC on separate partitions of the SSDs? I mean, is the following correct, and if so, should I then delete/empty the vfs.zfs.arc_max and vfs.zfs.arc_meta_limit variables from loader.conf?
Code:
# assuming the two SSDs show up as ada3 and ada4
gpart create -s gpt ada3
gpart add -s 4G -t freebsd-zfs -l ssd1a ada3   # ZIL partition
gpart add -t freebsd-zfs -l ssd1b ada3         # L2ARC partition
gpart create -s gpt ada4
gpart add -s 4G -t freebsd-zfs -l ssd2a ada4   # ZIL partition
gpart add -t freebsd-zfs -l ssd2b ada4         # L2ARC partition
# adding the mirrored ZIL and the L2ARC to the pool?
zpool add zroot cache /dev/gpt/ssd1b /dev/gpt/ssd2b log mirror /dev/gpt/ssd1a /dev/gpt/ssd2a
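Once the devices are added, I assume I could watch how the log and cache are actually being used with something like:
Code:
# per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v zroot 5
# ARC and L2ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses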
Thank you very much for your time.
Kind regards,
eduardm