HELP NEEDED: kernel: swap_pager: indefinite wait buffer

To me spreading it out over 8 drives like the OP seems like a problem waiting to happen.

Why is that? Kernel not handling multiple swap devices correctly or what? Any docs to back your worries up?

If I were running ZFS-only I would consider adding a zvol (or a dedicated zpool?) for swap.
To me that seems logical.

I remember reading somewhere (quick search found https://forums.freebsd.org/threads/swap-and-zfs.30298/ as one of many examples) that swap on ZFS is not the best idea. Unless something changed recently.
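For reference, the usual zvol-backed swap recipe looks roughly like this (a sketch only; the pool name zroot and the 16G size are assumptions, and the caveats from that thread still apply):

```shell
# Create a zvol for swap. sync=always and metadata-only caching are
# commonly recommended to reduce the risk of deadlocking under memory
# pressure, which is exactly the failure mode people warn about.
zfs create -V 16G -o compression=off -o sync=always \
    -o primarycache=metadata -o secondarycache=none zroot/swap

# Activate it as swap space
swapon /dev/zvol/zroot/swap
```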
 
I was reviewing your graphs and this sticks out to me.
Physical Memory = 33% used. The percentage isn't bad, but the absolute amount seems concerning: 22GB of RAM consumed.
How much of that is allocated to your two VMs?

Sorry, what VMs are you talking about?

I'm only running ctld to serve iSCSI storage at the minute, everything else is disabled.
 
I have to ask this question too. Why two M1015 controllers? With an 8-drive arrangement one card would do.
Then put your SSD on the motherboard SATA3.
If you were running an array of SSD drives I would understand. Dual Path backplane too.
The M1015 is only PCIe 2.0 with an x8 interface. 8 SSDs will saturate that interface. Been there, done that.
With today's rotating disks you cannot saturate it.

To me it seems sacrilegious to put a PCIe 2.0 card on a SM X11 board. The SAS3008 are not much more.

My 2U chassis has 12 disk bays and 3 mini-SAS connectors on the backplane. Even though I currently have only 8 disks populated, I'm planning to add 4 more in the near future.

The system board comes only with SATA ports, hence I'm unable to wire them to the SAS backplane.
 
Just for my info how many drives are you spreading your gmirror swap over?
The GELI partitions are authenticated only, no encryption. I don't care about disk performance. I do care about data integrity. I have 3 variants:
1. A single piece of swap on top of GMIRROR containing 2 GELI partitions on 2 disks, ie RAID1.
2. A single piece of swap on top of GSTRIPE + GMIRROR + 4 GELI on 4 disks, ie RAID10.
3. A single piece of swap on top of GELI.

My reasons:
1 & 2 because a GELI failure due to bit rot degrades the mirror, triggering a devd event, which sends me an email, which means I replace that disk, but the box keeps operating.
3 because I want to know if any swap is corrupted, although that requires scraping /var/log/messages.
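Variant 1 might be set up along these lines (a sketch only; the device names ada0p2/ada1p2, the mirror name, and the key path are hypothetical, and the flags should be checked against geli(8) and gmirror(8)):

```shell
# Authentication-only GELI: ealgo NULL disables encryption,
# HMAC/SHA256 provides the integrity check that catches bit rot.
# -P/-p: no passphrase, keyfile only.
geli init -a HMAC/SHA256 -e NULL -P -K /root/swap.key /dev/ada0p2
geli init -a HMAC/SHA256 -e NULL -P -K /root/swap.key /dev/ada1p2
geli attach -p -k /root/swap.key /dev/ada0p2
geli attach -p -k /root/swap.key /dev/ada1p2

# Mirror the two .eli providers (RAID1), then swap on the mirror.
# A GELI authentication failure surfaces as a degraded mirror,
# which devd can report.
gmirror label -v swap /dev/ada0p2.eli /dev/ada1p2.eli
swapon /dev/mirror/swap
```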

I never use ZFS because I've always had bad experiences with it, contrary to most other folk it seems.

I tried running InfluxDB (part of the same stack as telegraf) on a box months ago and found it ate memory with no regard for limits. Large queries consumed all memory, then all swap, and then it crashed. As it's a monolithic application, it lost the ability to store data at that point.
 
Any docs to back your worries up?

https://www.freebsd.org/doc/handbook/bsdinstall-partitioning.html
On larger systems with multiple SCSI disks or multiple IDE disks operating on different controllers, it is recommended that swap be configured on each drive, up to four drives. The swap partitions should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across disks. Large swap sizes are fine, even if swap is not used much. It might be easier to recover from a runaway program before being forced to reboot.
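In practice that handbook advice just means one roughly equal-sized swap entry per disk in /etc/fstab, e.g. (hypothetical device names):

```shell
# /etc/fstab -- the kernel stripes swap across the devices automatically
/dev/ada0p2  none  swap  sw  0  0
/dev/ada1p2  none  swap  sw  0  0
/dev/ada2p2  none  swap  sw  0  0
/dev/ada3p2  none  swap  sw  0  0
```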
 