Managing log files from multiple jails and multiple services

I have a tower server running multiple jails and multiple services.
This server also runs Win10 in a VirtualBox.

The tower is in an open cabinet (it used to be a closed cabinet but the Dell T420 is too long to close the door now ;)).

The issue is that every 2-5 seconds the hard disk reads or writes, which is annoying.

I checked with top and it looks like most of the writing/reading is coming from log files (nginx, etc.) and VirtualBox (no idea what it's writing/reading).

Is there any way to compile all logging (jails and host) into a single location?

How is everyone else managing log monitoring on servers with multiple jails/services?
I just started using sysutils/syslog-ng and it works well.
I set up my Munin box (system monitoring graphs) to host network-wide syslog-ng logging.
I am very happy with it. I use lighttpd with directory listing enabled, which gives me a simple web interface to all my network's log files.
If you're worried about disk thrashing you could use a memory disk for /var.
My syslog-ng setup deletes logs older than a week. There are also the original logs on each machine.
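A minimal syslog-ng receiver along these lines might look like the sketch below; the port, paths, and dataset layout are illustrative, not the poster's actual config:

```
# /usr/local/etc/syslog-ng/syslog-ng.conf -- central log collector (sketch)
source s_network {
    # accept syslog messages from the other machines on the LAN
    udp(ip(0.0.0.0) port(514));
};

destination d_hosts {
    # one directory per sending host, one file per day
    file("/var/log/hosts/${HOST}/${YEAR}-${MONTH}-${DAY}.log" create-dirs(yes));
};

log { source(s_network); destination(d_hosts); };
```

The week-old cleanup could then be a one-line cron job such as `find /var/log/hosts -type f -mtime +7 -delete`.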


You can set up another jail to collect all syslog. Point all your jails to it. You can use syslogd in the base system, syslog-ng, or rsyslogd.
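With the base-system syslogd, the per-jail side is just a forwarding rule and the collector jail listens on the network. A sketch, with made-up hostname and subnet (note the default `-s` secure flag must not be set on the receiver):

```
# In each jail: /etc/syslog.conf -- forward everything to the log jail
*.*                                             @logjail.example.lan

# In the collector jail: /etc/rc.conf -- accept remote syslog from the LAN
syslogd_enable="YES"
syslogd_flags="-a 192.168.1.0/24"
```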

Memory disks are not great for logs because something as unfortunate as a power failure will result in the loss of all logs -- unless you have some kind of script to copy them to a permanent location upon shutdown.

To address the constant syncing of the disk you could mount the UFS filesystem async or set the ZFS sync property on the dataset to disabled. However, syslog daemons tend to issue fsync() to make sure the data is written to disk, so this may not be enough.
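Assuming a dataset dedicated to logs (the names below are examples), the two options look like:

```
# ZFS: disable synchronous writes for the log dataset only
zfs set sync=disabled tank/var/log

# UFS alternative: mount the filesystem async via /etc/fstab
# /dev/ada0p2   /var/log   ufs   rw,async   2   2
```

Scoping sync=disabled to the log dataset limits the data-loss exposure to logs rather than the whole pool.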

In my case, on my firewall, syslog isn't all that active except for the ipfilter logs. I configured ipmon(8) to write to a file instead of syslog and set the ZFS dataset to sync=disabled, reducing the chatter somewhat. The problem is that there are too many port scans happening against my IPs, which keeps the logs active and they tend to get somewhat large at times.
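Pointing ipmon(8) at a file instead of syslog is an rc.conf change; the log path here is an example:

```
# /etc/rc.conf -- run ipmon as a daemon logging straight to a file,
# bypassing syslogd (and its fsync on every record)
ipmon_enable="YES"
ipmon_flags="-D /var/log/ipmon.log"
```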

Logging directly from some apps to files avoids the fsync() issued by syslogd (and other syslog daemons). The reason is that syslogd and friends try to avoid losing data. If you don't mind losing a record here or there due to a power failure or system crash, this approach should work.
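Since nginx came up earlier: it already writes its own log files, and its access_log directive can additionally buffer writes so the disk is touched far less often. The buffer and flush values below are illustrative:

```
# nginx.conf -- buffered access log: flush to disk in 64k chunks
# or at least every 5 minutes, whichever comes first
access_log /var/log/nginx/access.log combined buffer=64k flush=5m;
```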

BTW, my firewall is on a UPS. The only time it loses logs is when there's a panic, which might happen once or twice a year, because I run -CURRENT.