ZFS Kernel panic after upgrading to 11.1

Hello FreeBSD forum!

I have a backup solution for some Windows Servers backed by iSCSI ZFS zvols. I use ctld/ctladm to provision the disks to Windows, and I'm using:
Code:
option pblocksize 0
option unmap on
in ctl.conf.
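For context, the relevant target/LUN block looks roughly like this (the target name, portal group, and zvol path below are placeholders, not my real configuration):
Code:
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
}

target iqn.2017-09.local.backup:disk0 {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /dev/zvol/tank/backup0
		option pblocksize 0
		option unmap on
	}
}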

The FreeBSD box was originally running 11.0; after upgrading to 11.1-RELEASE-p1, it started to panic regularly. The server hardly gets a day of uptime since the upgrade. I thought it could be something with my deployment, so I rebuilt the system from scratch, but the issue persisted. All the crashes point to the same ZFS error:

Code:
Dump header from device: /dev/da0p2
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 5561098240
  Blocksize: 512
  Dumptime: Fri Sep 15 23:03:17 2017
  Hostname: F01833PAPP0
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 11.1-RELEASE-p1 #0: Wed Aug  9 11:55:48 UTC 2017
    root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC
  Panic String: Solaris(panic): zfs: allocating allocated segment(offset=42595471360 size=36864)

  Dump Parity: 1614276210
  Bounds: 1
  Dump Status: good

I was looking for some way to analyze the vmcore dump with kgdb, but had no luck so far and am having a really bad time.
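What I was attempting was roughly this, with kgdb from the devel/gdb port (the vmcore number is just whatever savecore wrote last):
Code:
# open the crash dump against the running kernel
kgdb /boot/kernel/kernel /var/crash/vmcore.1
# then, at the (kgdb) prompt, get a backtrace of the panicking thread:
bt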

The process executed by Windows is to copy some files, process them, and then delete old entries. I suspect this could have something to do with Windows getting under high load and sending the UNMAP command over iSCSI before the operation actually completes/commits. It could also be related to the compressed ZFS ARC, since that's one of the big changes I noticed between 11.0 and 11.1.

Do you have any advice or tips on how to start debugging this problem?

Thanks a lot!
 
It could be trying to allocate too much memory on some I/O operations. Try adjusting sysctl vfs.zfs.free_max_blocks to some sane level like 100000.
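Something like this; the sysctl should be settable on the fly, and you can keep it in /etc/sysctl.conf so it survives a reboot:
Code:
# apply immediately
sysctl vfs.zfs.free_max_blocks=100000
# keep it across reboots
echo 'vfs.zfs.free_max_blocks=100000' >> /etc/sysctl.conf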
 
Hi Swegen, thanks for your reply!

I changed this parameter, but the server crashed again, this time without generating a vmcore dump. Something I noticed is that the 11.1 server was using swap even a few minutes after a boot/crash; I never saw 10.3 use any swap, even under very high load/memory pressure caused by ZFS, and on that server this parameter is at its default value.

The 11.1 FreeBSD server is an 8 vCPU, 64 GB RAM VM (hardware version 11), running on VMware ESXi 6.0U2, with the latest open-vm-tools from ports. I have another FreeBSD server in this VMware cluster with the same hardware configuration doing pretty much the same job for another set of backup files, but that server runs FreeBSD 10.3-RELEASE and has never experienced a kernel panic, nor have I changed any parameters (I try to keep default values stock). This 10.3 server currently has 60 days of uptime since its latest reboot, and the "record" is 281 days.

I'm uploading my sysctl -a and zfs-stats -a output for comparison. Are there any other parameters I should be looking at?
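(For reference, the attached files were generated with nothing more than the following; zfs-stats comes from the sysutils/zfs-stats port:)
Code:
sysctl -a > sysctl.txt
zfs-stats -a > zfs-stats.txt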

Again, thanks a lot for your time and your help!
 

Attachments

  • zfs-stats.txt
  • sysctl.zip
I recall having daily stability problems on VMware with asynchronous I/O enabled. But this was with FreeBSD 11.0-RELEASE and Nginx. After disabling AIO there were no more crashes.
 
Hi Swegen, thanks again for your reply!

You're right; I asked about this server and was told that when it was 11.0, it was in pre-production. After the update to 11.1-RELEASE-p1, it went into production and the problems started.

I researched a bit today and found a scenario pretty much like the one you described: an Nginx server with UFS on VirtualBox having the same kind of issues caused by AIO. There's an open bug for this: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=168298 (it seems to have been happening since 8.3). Do you think it's worth opening a new bug for VMware? Based on this and other sources, I'll give RDM disks a try too.

Also, I tried to disable AIO via /etc/sysctl.conf with no luck. Is there any way to accomplish this without recompiling the kernel?
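(For what it's worth, this is how I confirmed that AIO is compiled in and listed the knobs that are exposed:)
Code:
# confirm AIO is built into the kernel (no module to unload)
sysctl kern.features.aio
# list the AIO-related tunables that are available
sysctl -a | grep -i aio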

Thanks in advance, best regards!
 
I think that bug report pretty much describes your problem. It seems that heavy disk/AIO usage triggers the crash, and that's why it didn't happen in pre-production.

Since 11.0-RELEASE, AIO is no longer a kernel module; it's integrated into the kernel. So completely disabling it is harder without recompiling, but setting these sysctls might be worth a try:

Code:
kern.ipc.aio.max_procs=0
vfs.aio.max_aio_per_proc=0
vfs.aio.max_aio_queue=0
vfs.aio.target_aio_procs=0
vfs.aio.max_aio_procs=0
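If you want to test before touching /etc/sysctl.conf, the vfs.aio ones should also be settable on the fly, e.g.:
Code:
# runtime equivalents of the settings above
sysctl vfs.aio.max_aio_procs=0
sysctl vfs.aio.target_aio_procs=0
sysctl vfs.aio.max_aio_queue=0
sysctl vfs.aio.max_aio_per_proc=0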
 
Does the Wired memory on that box increase up until it crashes? There's a bug in the way ZFS handles memory objects for IO that was introduced very recently (ZFS ABD memory issues or something along those lines). It definitely affects 11-STABLE, possibly 11.1. There's a bug report for it, a couple of threads on the mailing lists about it, and a couple of patches to work around it until a proper fix is added.

I'll see if I can dig up the references from e-mail. Here's one of the bug reports.
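If there's no monitoring on the box, a quick-and-dirty way to watch for that is to log wired memory and ARC size from cron every few minutes, something like:
Code:
#!/bin/sh
# log wired memory (in pages) and ARC size (in bytes) with a timestamp
wired=$(sysctl -n vm.stats.vm.v_wire_count)
arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
echo "$(date '+%F %T') wired_pages=${wired} arc_bytes=${arc}" >> /var/log/wired-arc.log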

Edit: Oh, wait, that bug only applies to 11-STABLE, it's not present in 11.1, so it's probably not your issue.
 
I think that bug report pretty much describes your problem. It seems that heavy disk/AIO usage triggers the crash, and that's why it didn't happen in pre-production.

Since 11.0-RELEASE, AIO is no longer a kernel module; it's integrated into the kernel. So completely disabling it is harder without recompiling, but setting these sysctls might be worth a try:

Code:
kern.ipc.aio.max_procs=0
vfs.aio.max_aio_per_proc=0
vfs.aio.max_aio_queue=0
vfs.aio.target_aio_procs=0
vfs.aio.max_aio_procs=0


Hi Swegen,

I'll try your suggestion and post the results, but I'm still concerned about "kern.features.aio" and the "collateral damage" that disabling these features may cause. Since AIO is compiled into the kernel by default, I don't know how this could affect other aspects of the system.

Meanwhile, I'm setting up two more servers to reproduce the scenario and test the workarounds.

One server will be a "stock" FreeBSD 11.1-RELEASE-p1 with RDM disks, the other will have your suggestions, and the problematic one (the one that started this thread) will have the sysctl parameters suggested in the last comment of the Bugzilla I posted. We have already redirected this server's load to another one running FreeBSD 10.3-RELEASE. I'll keep them running for a couple of days to see the results.

I'll keep you guys here updated.

Again, I appreciate your time and help!

Best regards!
 
Does the Wired memory on that box increase up until it crashes? There's a bug in the way ZFS handles memory objects for IO that was introduced very recently (ZFS ABD memory issues or something along those lines). It definitely affects 11-STABLE, possibly 11.1. There's a bug report for it, a couple of threads on the mailing lists about it, and a couple of patches to work around it until a proper fix is added.

I'll see if I can dig up the references from e-mail. Here's one of the bug reports.

Edit: Oh, wait, that bug only applies to 11-STABLE, it's not present in 11.1, so it's probably not your issue.

Hi Phoenix!

First of all, thanks for your reply!

The server doesn't have any monitoring beyond a ping/down check, so by the time I'm notified that it crashed, it's already in the crash dump process. I searched the core.txt.* files in /var/crash but didn't find any information about the server's Wired memory status.

These servers generally run with a large amount of Wired memory, about 90~95% of the server's memory, of which about 90~95% is allocated to the ARC. But there's usually 7~8 GB of RAM "free" (total installed RAM minus Wired), since the servers have at least 64 GB of RAM. I don't know if this is an issue, but we don't experience any crashes on 10.3-RELEASE.

I'll set up the test environment as I described in my reply to Swegen and post the results afterwards.

Still, I think it could be worth opening a bug for this case. Since AIO is integrated into the kernel in FreeBSD 11, I'm concerned about the "collateral damage" that disabling async I/O could cause, especially if I rebuild the kernel without it.

Thank you very much for your time and your help!
 
Wired memory, about 90~95% of the server's memory, of which about 90~95% is allocated to the ARC.
From FreeBSD 11.x onwards the ARC is compressed by default. I have had issues with ZFS after upgrading from FreeBSD 10.3 to FreeBSD 11.x as well. The compression can be disabled in /boot/loader.conf with
Code:
vfs.zfs.compressed_arc_enabled="0"
On my system the issues have disappeared.
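After the next reboot you can check that the tunable took effect with:
Code:
sysctl vfs.zfs.compressed_arc_enabled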
 
Hi chrbr, thanks for your reply!

My last post was not clear: we have seen this memory behavior (95% Wired, and 95% of Wired being ARC) since FreeBSD 10.3-RELEASE, and it seems to be the same on FreeBSD 11.x as well, except that FreeBSD 11 swaps a little. This isn't a problem on FreeBSD 10.3 at all (some servers have 200+ days of uptime).

I'll try disabling ARC compression, but it will be a shame if that ends up being the workaround, since it's a nice feature I was looking forward to.

I have some updates on the other topics.

Double-checking the VMware HCL, I found that FreeBSD 11.x is only supported on ESXi 6.5. We'll set up a testing environment with 6.5 and retry all the scenarios described before. In the meantime, the problematic box with the increased parameters from the Bugzilla report (vfs.aio.max_aio_queue, vfs.aio.max_aio_queue_per_proc, vfs.aio.max_aio_per_proc, vfs.aio.max_buf_aio) has already crashed, so we're setting up one stock FreeBSD 11.1-RELEASE-p1 with VMware RDM disks and another with Swegen's suggestions except kern.ipc.aio.max_procs, because we're leaving a kernel rebuild without the AIO option as a last resort, since we're concerned about the collateral damage.
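For reference, those increases went into /etc/sysctl.conf on the problematic box along these lines (the numbers below are only placeholders for illustration; the actual values came from the last Bugzilla comment):
Code:
# placeholder values, for illustration only
vfs.aio.max_aio_queue=65536
vfs.aio.max_aio_queue_per_proc=8192
vfs.aio.max_aio_per_proc=8192
vfs.aio.max_buf_aio=8192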

Again, thank you guys for your time and your help!
 
Dear carlossrp,
I have just seen the recommendation below on the ports mailing list today. The subject is "portmaster, portupgrade, etc".
Code:
portupgrade (or analog) plus ZFS compressed svn-updates ports tree is lightest option in practice:

zfs_load="YES"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="8M"
vfs.zfs.prefetch_disable="1"
vfs.zfs.vdev.trim_on_init="0"
vfs.zfs.compressed_arc_enabled="1"
I am curious if this would fix your issues since it keeps ARC compression enabled.
 