Strange ZFS problem after version upgrade

Hi,
yesterday I rebuilt a low-spec machine running FreeBSD 8.2 from the latest releng_8 sources. The upgrade finished peacefully, without any trouble. Since the supported ZFS version was bumped, I upgraded the single pool present and its datasets as follows: zpool v15 -> v28 and zfs v4 -> v5. Just like the core OS, those upgraded properly, with the respective messages being issued to confirm it.
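In case it matters, the upgrade itself was just the standard commands, something like this (with tank standing in for my pool's name):
Code:
zpool upgrade tank
zfs upgrade -r tank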

However, a strange problem arose later when I tried to update some ports software. The sources for those were supposed to download to a ZFS filesystem, but instead I got some error messages. After having a look into it, I found out that all freshly upgraded ZFS datasets are in something like a weird read-only state. The pool structure is intact and all data is present and available to read, but any write-related operation fails with a
Code:
Cannot allocate memory
error. Although I hadn't changed anything in the setup that worked before the upgrade, I double-checked all permissions, ACLs, ZFS properties, etc. of the affected datasets, and all seems fine. After an export-import combo on the pool, the root ZFS filesystem resumed normal operation. I also found out that all newly created ZFS sets work like a charm, while the upgraded old ones, having exactly the same properties, return "Cannot allocate memory" on write events.
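To illustrate, even something as trivial as this fails on an upgraded set (the path is just an example from my layout):
Code:
# touch /tank/distfiles/testfile
touch: /tank/distfiles/testfile: Cannot allocate memory
and the export-import combo mentioned above was simply:
Code:
zpool export tank
zpool import tank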

Has anyone seen this before, or any ideas on how to remedy it?

The box I run this system on has only 1 GB RAM, so I know it's low. I followed the ZFS tuning guide and have a setup like the low-memory example there. So far that has worked, and as I mentioned above it still does for the new sets.
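For what it's worth, the actual ARC usage can be inspected at runtime via the arcstats sysctls, in case that helps:
Code:
sysctl kstat.zfs.misc.arcstats.size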


Cheers
 
chmmr said:
The box I run this system on has only 1 GB RAM, so I know it's low. I followed the ZFS tuning guide and have a setup like the low-memory example there. So far that has worked, and as I mentioned above it still does for the new sets.
Is it possible the deduplication attribute was enabled somehow? The dedup tables consume quite a bit of RAM. It is also possible that code size growth has pushed you just past some important limit for your system. While I don't normally suggest this, you might want to try building a custom kernel with everything not needed disabled.
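If you go that route, it's the usual buildkernel procedure; a minimal sketch, assuming a stripped-down config named MINIMAL under /usr/src/sys/<arch>/conf (the name is just an example):
Code:
cd /usr/src
make buildkernel KERNCONF=MINIMAL
make installkernel KERNCONF=MINIMAL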

Is the 1GB a hard limit based on board/chipset, or can you add more memory to it?
 
Thanks for the reply, Terry.
Terry_Kennedy said:
Is it possible the deduplication attribute was enabled somehow?
Doesn't seem so - the dedup property is set to off for all datasets and that value is listed as default - so no changes there.
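I checked with something along these lines (tank again standing in for my pool), and every line shows off as the default:
Code:
zfs get -r dedup tank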
Terry_Kennedy said:
It is also possible that code size growth has pushed you just past some important limit for your system.
That is possible indeed, but it doesn't explain why the root set of the ZFS hierarchy and all newly created children work while the upgraded old children don't :\
Terry_Kennedy said:
While I don't normally suggest this, you might want to try building a custom kernel with everything not needed disabled.
I do run a custom kernel, and it's quite minimal; even some drivers for hardware that is present on the system but not used too frequently are loaded on demand as modules. Before posting I tried GENERIC as well, but got the same result.
Terry_Kennedy said:
Is the 1GB a hard limit based on board/chipset, or can you add more memory to it?
The machine is an old laptop (about 4 years old - Core Duo, 1 GB RAM) that I use when out of home and for testing purposes before I touch the home server. While it's possible to add more RAM (at least another GB), that probably won't be necessary. What actually bothers me is whether this issue is hardware- or software-related. My home server has the power for normal ZFS operation (and runs amd64 FreeBSD), but if this problem is software-related that won't help it :). Plus, I don't have the spare storage space to juggle with creating new (working) ZFS sets, moving the data over, and destroying the old (hypothetically faulted) ones. I checked the freebsd-fs mailing list archive after the ZFS bump date and no one reported such issues, so I didn't expect trouble - especially since the ZFS sets are not boot or system partitions, just some data storage.

Cheers
 
If the low-power machine is i386 on v8, can you install freecolor and run it to see whether that program frees up enough memory for the ZFS problem to subside? It works here to start Xorg more reliably, ... and to free up memory on current v9 when it was compiled larger and compilations soaked up memory, so that shutdown sometimes could not occur.
 
jb_fvwm2 said:
If the low-power machine is i386 on v8, can you install freecolor and run it to see whether that program frees up enough memory for the ZFS problem to subside?
Tried it and sadly nothing changes :\. Even with 93% of physical memory free and virtually no system load, the upgraded ZFS sets behave no differently, while the new ones are just fine.
 
Any solution to this? It happened with my system as well, but I have no partition left to write to. I just upgraded the pool and zfs to v28 and v5 respectively, and all I get is
Code:
cannot allocate memory

Edit: Fixed it for now by removing
Code:
vm.kmem_size="512M"                                                             
vm.kmem_size_max="640M"                                                         
vfs.zfs.arc_max="40M"                                                           
vfs.zfs.vdev.cache.size="5M"
from my /boot/loader.conf. Strangely enough, I had added these lines purposely to tune ZFS for my low-memory (1 GB) environment.
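For comparison, the values the kernel now picks on its own can be read back at runtime:
Code:
sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max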
 
Is this on i386 or amd64? On amd64 those settings would spell disaster, because you're not supposed to set vm.kmem_size* that low; in fact, they should not be changed at all from the defaults auto-detected by the kernel.
 
No, I didn't do that, since it worked fine with these settings under 8.0 through 8.2. I wasn't aware of this limitation - perhaps this mismatch is somehow the reason for my problem? Currently the system runs smoothly without anything tuned - I haven't done any disk-intensive copying tasks yet, though. But as long as it works like it does now, I'm OK with it :)
 
Last time I used ZFS on i386, setting vm.kmem_size_max higher than 512M on a stock kernel resulted in a non-bootable system; that was with 8-STABLE, I think. Set both vm.kmem_size and vm.kmem_size_max to 512M and it should work fine.
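That is, in /boot/loader.conf:
Code:
vm.kmem_size="512M"
vm.kmem_size_max="512M"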
 