We use ELK as a centralized logging solution. That means we have an Elasticsearch cluster, a Logstash cluster, Kibana and Grafana.

I was looking for a grok filter so we could export our Jasmin logs to Grafana and then search through all the logs based on different patterns. Unfortunately, I was not able to find a working grok filter for Jasmin. Jasmin SMS Gateway logs are basically in the standard syslog format, and after a few failed attempts at converting them to JSON, I decided to keep them in syslog format and find another way to read and import them into Grafana. The big issue is that some of Jasmin's DEBUG logs are multiline, so they have to be interpreted by Logstash using the multiline codec/filter.

1) Install the Logstash multiline filter

Be sure that you have java-openjdk and java-openjdk-devel installed (the devel package contains the javac binary, which Logstash needs in order to install the multiline filter)!
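The install command looks roughly like this; the install path is an assumption for a package-based Logstash 5.x setup (on Logstash 2.x the binary is /opt/logstash/bin/plugin instead):

```shell
# Install the multiline filter plugin
# (path assumes a package-based Logstash install; adjust to your environment)
/usr/share/logstash/bin/logstash-plugin install logstash-filter-multiline
```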

2) Install Filebeat from the Elasticsearch repo (I won't detail this step since it's pretty straightforward and you can find plenty of information on the Elasticsearch web page). As soon as Filebeat is installed, configure it to forward the logs to your Logstash server/cluster. It's pretty much the default Filebeat config, adapted for our environment. Here is the content of my /etc/filebeat/filebeat.yml file:
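Since the original file isn't reproduced here, this is a minimal sketch of what such a filebeat.yml could look like (Filebeat 5.x syntax); the log path and the Logstash host/port are assumptions for your environment:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/jasmin/*.log          # assumed Jasmin log location
  tags: ["jasmin"]                   # tag used later by the Logstash filter

output.logstash:
  hosts: ["logstash-server-ip:5044"] # your Logstash server/cluster
```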

As you can see, I am setting the tag jasmin when sending the logs, to make it easier later when creating the Logstash filter.
Be sure to enable Filebeat and restart the service after changing the configuration!
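On a systemd-based distribution, that means:

```shell
systemctl enable filebeat
systemctl restart filebeat
```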

3) Create an Elasticsearch index template for the Jasmin logs. I am using this template (you can save it as jasmin-template.json, for example) and then load it into Elasticsearch with this command:
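The template itself isn't shown above; a minimal illustrative jasmin-template.json might look like this (the settings are placeholders, not the author's actual template):

```json
{
  "template": "jasmin",
  "settings": {
    "number_of_shards": 1,
    "index.refresh_interval": "5s"
  }
}
```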

curl -XPUT 'http://elastic-search-server-ip:9200/_template/jasmin' -d @/path-to/jasmin-template.json

So now you have Filebeat configured to send the Jasmin SMS Gateway's logs to Logstash, which will index them into an Elasticsearch index called jasmin.

4) The next step is to create the grok filter for Jasmin on the Logstash server. On the Logstash server, via an SSH console, go to /etc/logstash/ and create a patterns file (which contains grok patterns). In my case, I created a folder named grok inside /etc/logstash and named the file patterns. Here is the content of the /etc/logstash/grok/patterns file:
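The patterns file isn't reproduced here. As a sketch: Jasmin's default log lines follow the Python logging format `%(asctime)s %(levelname)-8s %(process)d %(message)s`, so a pattern along these lines could work (the name JASMINLOG is my own choice; adjust it if your Jasmin LOG_FORMAT differs):

```
JASMINLOG %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}\s+%{POSINT:pid} %{GREEDYDATA:logmessage}
```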

Then go to /etc/logstash/conf.d and be sure you have a 02-beats-input.conf file with this content:
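A typical 02-beats-input.conf looks like this (5044 is the conventional Beats port; adjust if yours differs):

```
input {
  beats {
    port => 5044
  }
}
```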

Also create the filter file for jasmin as 07-jasmin-filter.conf with this content:
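As a sketch of what 07-jasmin-filter.conf could contain, assuming the patterns file defines a JASMINLOG pattern and that multiline DEBUG entries are continuation lines that don't start with a timestamp:

```
filter {
  if "jasmin" in [tags] {
    # join lines that do not start with a timestamp onto the previous event
    multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate  => true
      what    => "previous"
    }
    grok {
      patterns_dir => ["/etc/logstash/grok"]
      match => { "message" => "%{JASMINLOG}" }
    }
  }
}
```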

The first file takes input from Filebeat; the second one processes the Jasmin logs (both single-line and multiline entries, only where the tag is jasmin).

Restart Logstash after that.

In a future post, I’ll explain how to create a log browser in Grafana so you can explore your freshly exported Jasmin logs.


About Author

I am a Linux enthusiast, currently working as a Senior Linux System Administrator. I am also a freelancer and help people complete different jobs. You can hire me on Freelancer.com