ZFS on i386 PAE kernels

Hi,

I'm running FreeBSD 8.4-RC2 on i386 (32-bit PAE-enabled kernel) with 12 GB RAM. How can I tweak ZFS parameters to make use of the RAM above the 4 GB limit on the i386 architecture? The full 12 GB RAM is seen by the kernel and is hence available to be utilized by user-space. However, when setting different values for variables like vm.kmem_size_max and vfs.zfs.arc_max, it seems that only the first 4 GB is available for kernel/ZFS usage.
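
For reference, I've been setting these as tunables in /boot/loader.conf. The values below are only examples of the kind of thing I tried, not a working recipe:

    # /boot/loader.conf -- illustrative values, just to show which knobs I turned
    vm.kmem_size="8G"           # try to let the kernel memory arena grow past 4 GB
    vm.kmem_size_max="8G"
    vfs.zfs.arc_max="6G"        # try to let the ZFS ARC use the extra RAM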

Regards

Magnus
 
linmag7 said:
I'm running FreeBSD 8.4-RC2 on i386 (32-bit PAE-enabled kernel) with 12 GB RAM. How can I tweak ZFS parameters to make use of the RAM above the 4 GB limit on the i386 architecture?
Use a 64-bit OS.

The full 12 GB RAM is seen by the kernel and is hence available to be utilized by user-space.
No, it isn't. It's only available to PAE-enabled applications.
 
In PAE mode the CPU's virtual address space is still 32-bit. Of that full 4GB, 1GB is reserved for the kernel and 3GB for user-space. A single process still cannot allocate more than about 2-2.5GB of RAM due to reservations for memory-mapped libraries, etc. The only way to use all 12GB of RAM in that case is to run many processes, or to use unmapped memory.
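
If you want to see this for yourself, comparing the physical memory with the kernel's memory arena via sysctl(8) should make the gap obvious (the exact numbers depend on your tuning):

    sysctl hw.physmem           # ~12 GB of physical RAM on this box
    sysctl vm.kmem_size         # kernel memory arena, limited by the small i386 KVA
    sysctl vm.kmem_size_max
    sysctl vfs.zfs.arc_max      # the ARC has to fit inside vm.kmem_size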

I've never experimented with PAE, but IIRC UFS should be able to use more memory for the buffer cache by not mapping it all into the kernel address space. Recent changes in 10-CURRENT should even allow disk I/O without mapping cache blocks into the kernel address space. Unfortunately, that doesn't work for ZFS: with all its additional functionality, such as checksumming, compression, etc., it has to map everything into the kernel address space, so the 1GB of KVA on i386 creates a heavy bottleneck for it.
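
The usual partial workaround on i386 is to build a custom kernel with a larger KVA and then size the kmem arena and ARC to fit inside it. A rough sketch only, with illustrative values, and keep in mind that every page given to the kernel is taken away from user-space:

    # custom kernel config: KVA_PAGES is in 4 MB units, default 256 (= 1 GB of KVA)
    options KVA_PAGES=512       # roughly 2 GB of kernel virtual address space

    # /boot/loader.conf, sized to fit inside the enlarged KVA
    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"
    vfs.zfs.arc_max="512M"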
 
I wonder what the reasoning is for sticking with i386? You do know you can run 32-bit binaries on a 64-bit FreeBSD?
 
The only reason I can think of is when you have a server-class machine from around 2005-2006 with an Intel CPU that didn't yet have EM64T support, but you still want to make use of the machine.
 
@kpa: Who pays your power bill? The only solution is to throw the PAE hardware away and replace it with AMD64-compatible hardware. Sometimes you can upgrade with two cheap CPUs from eBay if it's worth the trouble. Which brings me back to "who pays for the power". Yes, the old 15k RPM U320 disks did have more IOPS than current-day 7200 RPM SATA disks, but you need a JBOD full of them to replace two SSDs. So I wouldn't even recommend them for a lab environment because of the bottlenecks (e.g. a shared PCI-X bus between HBA and NIC).
 
I don't have such hardware myself; I was just suggesting one possible reason to use such hardware and the 32-bit version of FreeBSD.
 
kpa said:
The only reason I can think of is when you have a server-class machine from around 2005-2006 with an Intel CPU that didn't yet have EM64T support, but you still want to make use of the machine.

This is pretty much the case for me: I have a 3 GHz Xeon with an 85 W TDP which now serves as my NAS. It resides in the basement, where I need a radiator or some other heat source to keep the place warm anyway. I might still do the eBay thing and get a low-cost 64-bit system, but I figure my current config might perform well enough for the time being.
 
@mav@:

Thanks for your reply. I guess I might not have that much use for the memory above 4 GB. Stuff like the NFS server and Samba might be able to make use of PAE memory, but for "home use" in the case of my NAS setup, the user-space memory available in the first 4 GB might already be enough. ZFS will have to make do with 1 GB of memory for cache. I might be better off putting the disks in my SPARC, a Sun Blade 2000 which has 8 GB of RAM.
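
If I do keep the pool on this box, I'll probably just cap everything to fit inside the default 1 GB of KVA, roughly along these lines (the numbers are only my guess at sane values, not tested):

    # /boot/loader.conf -- conservative i386 settings
    vm.kmem_size="512M"
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="256M"          # leave room in the KVA for the rest of the kernel
    vfs.zfs.vdev.cache.size="8M"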
 