What web interfaces for sysutils/rsyslog8 exist in ports?

Good day!

I have a FreeBSD 11.1 + Rsyslog + MySQL installation, and I need a web interface for viewing logs. Almost all manuals suggest LogAnalyzer for this, but it doesn't exist in ports. Maybe it was deleted some time ago?

So what software can I use for centralized log viewing through a web interface?
 
Ok, I manually downloaded the LogAnalyzer tarball and installed it. It's just a PHP application, which can be installed without ports.
 
FYI, I ended up having rsyslogd write to a PostgreSQL database, which I query remotely via pgadmin3 with simple queries. I can sort/search easily enough with this. Later I plan to write some triggers to send out mail for things I need an immediate response to, and to add automatic record deletion.

The syslog server is receiving syslog from about 10-20 switches/hosts, so it's not heavily used. I only have 256MB/1cpu allocated to it. So far it's using a whopping 146MB.
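
For anyone wanting to replicate this: a minimal sketch of the rsyslog side, using the legacy ompgsql directive from the rsyslog documentation. The server address, database name, user, and password below are placeholders; the SystemEvents table and its columns come from the createDB.sql file shipped with rsyslog's PostgreSQL support.

Code:
# in rsyslog.conf: load the PostgreSQL output module
$ModLoad ompgsql
# send everything to the local PostgreSQL instance
*.* :ompgsql:127.0.0.1,Syslog,rsyslog,yourpassword

From there pgadmin3 (or plain psql) only needs simple queries, e.g.:

Code:
SELECT ReceivedAt, FromHost, Message
FROM SystemEvents
ORDER BY ReceivedAt DESC
LIMIT 50;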
 
So what software can I use for centralized log viewing through a web interface?
Echofish, of course:

https://echothrust.github.io/echofish/

if you want to go that route. But IMHO you are doing everything wrong. Where to begin? Firstly, rsyslog is a Linux logging daemon which should not be used on other OSes. Personally I run a syslog-ng server, but for small installations OpenBSD's syslog server is really sufficient. FreeBSD's syslog sucks both as a client and particularly as a server. Secondly, an SQL database is ill-suited to storing frequently written log data. Just for the sake of comparison with the ELK stack mentioned by SirDice: notice that Logstash does the data collection, so rsyslog, syslog-ng, and syslog are not necessary; ELK uses an Elasticsearch cluster to store the log data; and finally Kibana is used to display it. The whole ELK stack is a pretty nifty thing to impress upper management and keep your job, but it is useless.

https://wikitech.wikimedia.org/wiki/Logstash

Namely, the real problem is not collecting the log files (syslog-ng does a good job there) but parsing them and making sense of the data. The only product that addresses this problem seriously is Splunk:

https://www.splunk.com/

As a matter of disclosure, I will say that a bunch of my former classmates from my PhD math studies work there. Splunk costs a lot of money, so what can a person with a budget of zero do, short of becoming an expert in anomaly detection (machine learning) and writing their own application? Not much, if you ask me.

You can look into ELK, but you will find out that you have to teach it how to parse the logs to extract any useful information. You can try fluentd and you will end up with the same shit. Quickly you will come to the conclusion that if you have to teach the tool how to read a log, you might as well write your own damn thing. You can use a Perl script to mask out the stuff you don't care about, keeping track of how many times each line was seen. You get a report of new log lines not in the ignore list and how many times they were seen (with some scrubbing of unique data like PIDs and session IDs so you get a useful count), plus any ignored lines that didn't fall into the expected range of counts.
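
The poster describes a Perl script; as a rough illustration of the same scrub-and-count approach, here is a hypothetical sketch in Python (the patterns, ignore list, and expected range are made-up examples, not the actual rules):

Code:
#!/usr/bin/env python3
# Sketch of the "scrub and count" report described above.
import re
import sys
from collections import Counter

# Replace volatile fields so repeated messages collapse into one key.
SCRUBBERS = [
    (re.compile(r"\[\d+\]"), "[PID]"),                    # e.g. sshd[1234]
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),  # IPv4 addresses
    (re.compile(r"session \S+"), "session <ID>"),
]

# Known noise: report these only if their counts look unusual.
IGNORE = [re.compile(r"cron\[PID\]")]
EXPECTED = (0, 2000)   # made-up "normal" count range per run

counts = Counter()
for line in sys.stdin:
    msg = line.rstrip("\n")
    for pattern, replacement in SCRUBBERS:
        msg = pattern.sub(replacement, msg)
    counts[msg] += 1

for msg, n in counts.most_common():
    ignored = any(p.search(msg) for p in IGNORE)
    if ignored and EXPECTED[0] <= n <= EXPECTED[1]:
        continue                     # boring: on the ignore list, in range
    print(f"{n:7d}  {msg}")

Pipe a day's log through it (e.g. cat /var/log/messages | ./logreport.py) and diff the report against yesterday's.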

Pushing that into a database for historic info and visualization wouldn't be too hard.
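
If it helps anyone, a hypothetical continuation of the sketch above for exactly that: appending each day's counts (the counts Counter from the previous script) to a local SQLite file. The schema is invented for illustration.

Code:
# Append today's counts to a local SQLite file for trending.
import datetime
import sqlite3

con = sqlite3.connect("logcounts.db")
con.execute("""CREATE TABLE IF NOT EXISTS counts
               (day TEXT, pattern TEXT, n INTEGER)""")
today = datetime.date.today().isoformat()
con.executemany("INSERT INTO counts VALUES (?, ?, ?)",
                [(today, msg, n) for msg, n in counts.items()])
con.commit()
con.close()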

Check out the following links and the ideas found here:
http://undeadly.org/cgi?action=article&sid=20091215120606
http://www.ranum.com/security/computer_security/papers/ai/
And much more info:
http://www.ranum.com/security/computer_security/archives/logging-notes.pdf

Personally, my interest is in UNIX system logs and IDS/IPS events, with full packet captures. The simplest form I have used is automated processing of IDS events, firewall logs, and full pcap data served as static files on a webserver. I would be interested in a CLI log viewer with ncurses, or scripted output (maybe using pipecut to process data as you search for what you want, in the simplest UNIX way).
 
The syslog server is receiving syslog from about 10-20 switches/hosts, so it's not heavily used. I only have 256MB/1cpu allocated to it. So far it's using a whopping 146MB.
One of my clients uses a central syslog server. A single firewall cluster produces 25GB of logging per day. It's a good thing it's all text and therefore highly compressible, but as your network gets bigger and bigger, this way of logging scales rather poorly.

You can look into ELK, but you will find out that you have to teach it how to parse the logs to extract any useful information.
Yeah, this is the tricky part. Setting up an ELK stack by itself is fairly easy. It's extracting the right bits of information that's going to take most of the time and resources.
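
To make the "teach it how to parse" part concrete: with ELK that teaching mostly means writing grok patterns in the Logstash config. A minimal example for plain BSD-style syslog lines (the field names are my own choice; the %{...} patterns are standard grok):

Code:
filter {
  grok {
    # "Dec 15 12:06:06 fw01 sshd[1234]: Failed password for ..."
    match => { "message" => "%{SYSLOGTIMESTAMP:ts} %{SYSLOGHOST:host} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:msg}" }
  }
}

Multiply that by every device family that logs in its own slightly different format, and you can see where the time and resources go.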
 
graylog2 is in ports; it works fine jailed and behind haproxy + auth/TLS. I have notes on installing it via ansible for $DAYJOB if it's not possible to muddle through it. I consolidate ~7 hosts with a lot of jails into a single instance. I added a ZIL to help it keep up with log ingestion; watch your disk I/O.
 
Thanks, guys, for your answers! Oko, your detailed explanation of log collecting and analyzing is very useful and informative.
My need for log collection is just to see messages from various SOHO network devices, so I want to try the simplest tools first.

dch, graylog is a MongoDB, Elasticsearch, and Java based solution... Too complex and heavy for my needs.
I really don't understand why there isn't a simple tool for viewing one-table SQL data with at most 3 search keys :)
 
goshanecr graylog is actually super easy to set up, and its searchability for logs is awesome. I suspect the reason there's no simple tool is that grep handles that case reasonably well already, and most installations will require something with a bit more flexibility than grep. For example, yesterday two of our third-party APIs fell over, and we needed to dig out all the failing transactions for troubleshooting. Every day, a different search with different requirements.

You may find rolling your own simple solution is easy; http://www.rsyslog.com/doc/v8-stable/tutorials/database.html should give you some ideas. Please consider writing it up when you get something working!
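
If you do roll your own, the "one table, three search keys" viewer really can be small. A hypothetical sketch in Python against rsyslog's stock MySQL schema (the SystemEvents table from its createDB.sql), using the third-party PyMySQL driver and nothing else outside the standard library; host, credentials, and the port number are placeholders:

Code:
#!/usr/bin/env python3
# Tiny read-only web viewer for rsyslog's SystemEvents table.
# Search keys: host, facility, and a substring of the message.
import html
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

import pymysql  # databases/py-pymysql port, or pip install pymysql

def query_logs(host, facility, text):
    conn = pymysql.connect(host="localhost", user="rsyslog",
                           password="secret", database="Syslog")
    sql = ("SELECT ReceivedAt, FromHost, Facility, Message "
           "FROM SystemEvents WHERE 1=1")
    args = []
    if host:
        sql += " AND FromHost = %s"
        args.append(host)
    if facility:
        sql += " AND Facility = %s"
        args.append(facility)
    if text:
        sql += " AND Message LIKE %s"
        args.append("%" + text + "%")
    sql += " ORDER BY ReceivedAt DESC LIMIT 200"
    try:
        with conn.cursor() as cur:
            cur.execute(sql, args)      # parameterized, no SQL injection
            return cur.fetchall()
    finally:
        conn.close()

class Viewer(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        rows = query_logs(q.get("host", [""])[0],
                          q.get("facility", [""])[0],
                          q.get("text", [""])[0])
        cells = "".join(
            "<tr>" + "".join(f"<td>{html.escape(str(c))}</td>" for c in row)
            + "</tr>" for row in rows)
        body = f"<table border=1>{cells}</table>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), Viewer).serve_forever()

Then http://127.0.0.1:8080/?host=switch1&text=link would be the whole interface.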
 
The only product that addresses this problem seriously is Splunk
https://www.splunk.com/

... Splunk costs a lot of money ...

Splunk is awesome as far as I am concerned. For any organization that has a mountain of syslog, this is the machine to consider using.

Now Oko, you should ask your friends to port Splunk to FreeBSD. That would be a match made in heaven. Of course, you would still need $$$ for a license. :p
 