ZFS performance lower after update

Hi,

I'm using an old dual-CPU Pentium III system with 4 GB of memory as my NAS box, and performance seems to have gone down after updating from FreeBSD 8.2 to 10. I have four 1.5 TB disks in raidz1, and write performance with dd used to be about 50-60 MB/s; now it's about 40 MB/s. Read performance over the network has also gone down: earlier I got something like 60 MB/s over Samba, now only about 40 MB/s. However, read speeds with dd seem to be close to 100 MB/s (with /dev/null as output). Any ideas where to look to get performance back to the previous level without investing in new hardware?

Also, when reading over network, CPU seems to be 40-50% idle.
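For reference, the dd tests were along these lines (the pool name is a placeholder and the block size/count are from memory, not exact):

Code:
# sequential write test into the pool
dd if=/dev/zero of=/pool/testfile bs=1m count=10000
# sequential read test back out to /dev/null
dd if=/pool/testfile of=/dev/null bs=1m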
 
If you're interested in 10.0, I can recommend updating to 10.0-STABLE. As with previous .0 versions, not everything may be tweaked to perfection yet, and you don't have to wait for 10.1 to get fixes and updates if you track -STABLE. It has always been good to me, at least for my home situation.
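Roughly, tracking -STABLE means something like this (just a sketch; see the Handbook for the full procedure and double-check the svn mirror URL):

Code:
# fetch the stable/10 sources (svnlite is in the 10.x base system; devel/subversion works too)
svnlite checkout https://svn.freebsd.org/base/stable/10 /usr/src
cd /usr/src
make buildworld
make buildkernel
make installkernel
# reboot into the new kernel, then:
make installworld
mergemaster -Ui
# reboot again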
 
I updated to 10.0-STABLE according to these instructions: http://www.bsdnow.tv/tutorials/stable-current. However, at first I got kernel panics when booting, and after I commented out the line
Code:
zfs_enable="YES"
from /etc/rc.conf, the system booted (I have the OS on a separate disk), but when I ran zpool status I again got a kernel panic.


Here's the core.txt.1 file that was generated during the kernel panic.
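For what it's worth, the dump was captured the usual way (a sketch, assuming the defaults):

Code:
# /etc/rc.conf: dump kernel memory to the swap device on panic
dumpdev="AUTO"
# after rebooting, savecore(8)/crashinfo(8) leave the dump and its text summary in
# /var/crash/vmcore.1 and /var/crash/core.txt.1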

Any suggestions on how to fix this?
 
Do you have:
Code:
zfs_load="YES"
in your /boot/loader.conf file? Actually, it might be good to share your entire /boot/loader.conf with us.
 
I didn't, but adding it didn't have any effect. Anyway, here's my /boot/loader.conf:

Code:
# kernel memory and ZFS/ARC tuning
zfs_load="YES"
vm.kmem_size="2048M"
vfs.zfs.prefetch_disable="0"
vfs.zfs.txg.timeout="5"
vfs.zfs.arc_max="1792M"
vfs.zfs.arc_min="1024M"
vfs.zfs.vdev.min_pending="4"
vfs.zfs.vdev.max_pending="8"

# accept filters and misc kernel modules
accf_data_load="YES"
accf_http_load="YES"
ahci_load="YES"
aio_load="YES"
cc_htcp_load="YES"

# igb(4) NIC interrupt tuning
hw.igb.max_interrupt_rate="32000"
hw.igb.num_queues="1"

However, I tried commenting out everything except zfs_load="YES" and I still got kernel panics.
 
trh411 said:
Mike234534 said:
I updated to 10.0-STABLE according to these instructions: http://www.bsdnow.tv/tutorials/stable-current. However, at first I got kernel panics...
What does uname -a show after your upgrade to FreeBSD-10-STABLE?

It shows this:

Code:
FreeBSD ruoska.local 10.0-STABLE FreeBSD 10.0-STABLE #2 r262627: Sat Mar  1 14:31:57 EET 2014     mikael@ruoska.local:/usr/obj/usr/src/sys/RUOSKA  i386

I used a custom kernel to enable more than 512 MB of kernel memory. I had added the line
Code:
options KVA_PAGES=640
and commented out the I486 and I386 CPU types in the configuration file. Otherwise the config file is similar to GENERIC.
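The relevant part of the config looks roughly like this (RUOSKA is just my config name; the cpu lines are from memory):

Code:
# /usr/src/sys/i386/conf/RUOSKA -- otherwise a copy of GENERIC
ident           RUOSKA
#cpu            I486_CPU
cpu             I686_CPU
# give the kernel more virtual address space than the 1 GB default
options         KVA_PAGES=640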
 
Mike234534 said:
Code:
FreeBSD ruoska.local 10.0-STABLE FreeBSD 10.0-STABLE #2 r262627: Sat Mar  1 14:31:57 EET 2014     mikael@ruoska.local:/usr/obj/usr/src/sys/RUOSKA  i386

I used a custom kernel to enable more than 512 MB of kernel memory. I had added the line
Code:
options KVA_PAGES=640
and commented out the I486 and I386 CPU types in the configuration file. Otherwise the config file is similar to GENERIC.
How much physical memory is on the system?

I don't know how much help I can be going forward. I have no experience building or running FreeBSD on the i386 architecture. Sorry.
 
Mike234534 said:
I used a custom kernel to enable more than 512 MB of kernel memory. I had added the line
Code:
options KVA_PAGES=640
and commented out the I486 and I386 CPU types in the configuration file. Otherwise the config file is similar to GENERIC.
According to KVA_PAGES, setting KVA_PAGES=640 will divide the virtual address space into one 2.5 GB chunk for kernel space and one 1.5 GB chunk for user space. Is that what you want? Or maybe I'm misreading it?
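To spell out the arithmetic (assuming KVA_PAGES counts 4 MB units, as on i386 without PAE):

Code:
640 * 4 MB = 2560 MB ~ 2.5 GB of kernel virtual address space
4 GB total i386 virtual address space - 2.5 GB = 1.5 GB left for user space
default: 256 * 4 MB = 1 GB kernel / 3 GB user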
 
trh411 said:
Mike234534 said:
I used a custom kernel to enable more than 512 MB of kernel memory. I had added the line
Code:
options KVA_PAGES=640
and commented out the I486 and I386 CPU types in the configuration file. Otherwise the config file is similar to GENERIC.
According to KVA_PAGES, setting KVA_PAGES=640 will divide the virtual address space into one 2.5 GB chunk for kernel space and one 1.5 GB chunk for user space. Is that what you want? Or maybe I'm misreading it?

Yes, that's intentional. ZFS keeps its cache (the ARC) in kernel memory, so the larger kernel address space is needed to let it use most of the RAM as cache.
 
What you're doing with KVA_PAGES isn't a good place to start ZFS tuning on i386. I would revert that to the default and start with these in /boot/loader.conf:

Code:
vm.kmem_size_max="512M"
vfs.zfs.arc_max="256M"

You're asking a lot with the kernel memory address space set to 2 GB; that has to come at a price. I'd say your performance problems stem from this and from changes between FreeBSD 10 and the earlier versions. i386 isn't really a good platform for such use; consider finding some replacement hardware that can run the amd64 version of FreeBSD.
 
kpa said:
What you're doing with KVA_PAGES isn't a good place to start ZFS tuning on i386. I would revert that to the default and start with these in /boot/loader.conf:

Code:
vm.kmem_size_max="512M"
vfs.zfs.arc_max="256M"

You're asking a lot with the kernel memory address space set to 2 GB; that has to come at a price. I'd say your performance problems stem from this and from changes between FreeBSD 10 and the earlier versions. i386 isn't really a good platform for such use; consider finding some replacement hardware that can run the amd64 version of FreeBSD.

I just tried a kernel compiled with the GENERIC settings plus those loader.conf values, and I still got a kernel panic.
 
I just reverted the system to 10.0-RELEASE, and everything works fine. It seems like there are some problems in the 32-bit ZFS code in 10-STABLE.
 
Although it's better to use amd64 for ZFS, it shouldn't panic on i386. There have been some ZFS updates after the release, which is why I suggested updating to -STABLE, but apparently one of them breaks ZFS on i386. It might be a good idea to report this on the freebsd-fs@ mailing list.
 
SirDice said:
Although it's better to use amd64 for ZFS, it shouldn't panic on i386. There have been some ZFS updates after the release, which is why I suggested updating to -STABLE, but apparently one of them breaks ZFS on i386. It might be a good idea to report this on the freebsd-fs@ mailing list.

OK, what's the full address of the list? And what should I mention in the mail?
 
Mike234534 said:
OK, what's the full address of the list? And what should I mention in the mail?
The full address is freebsd-fs@freebsd.org. I would include the output of uname -a, since it provides the basics of your FreeBSD release, architecture, etc. in a concise, easily recognizable format. I would also provide a brief description of the problem and at least the ZFS-related statements in /boot/loader.conf and /etc/rc.conf. And lastly, provide the backtrace from the core.txt.N file, with a pastebin link to the full core.txt.N file.

This should be a good start. If someone needs more info they will request it.
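If the backtrace isn't already in the core.txt.N summary, something along these lines should pull it out of the dump (assuming the default /var/crash location and the kernel the dump was taken with):

Code:
# open the crash dump with the matching kernel
kgdb /boot/kernel/kernel /var/crash/vmcore.1
# then at the (kgdb) prompt:
bt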
 