Logstash is a powerful tool for centralizing and analyzing logs, which can help provide an overview of your environment and identify issues with your servers. One way to increase the effectiveness of your ELK Stack (Elasticsearch, Logstash, and Kibana) setup is to collect important application logs and structure the log data by employing filters, so that the data can be readily analyzed and queried. We will build our filters around grok patterns, which parse the data in the logs into useful bits of information.
This guide is a sequel to the How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04 tutorial, and focuses primarily on adding Logstash filters for various common application logs.
To follow this tutorial, you must have a working Logstash server that is receiving logs from a shipper such as Filebeat. If you do not have Logstash set up to receive logs, here is the tutorial that will get you started: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
This guide assumes your Logstash setup matches the prerequisite tutorial:
- Logstash is installed in /opt/logstash
- Your Logstash configuration files are located in /etc/logstash/conf.d
- You have an input file named 02-beats-input.conf
- You have an output file named 30-elasticsearch-output.conf
You may need to create the patterns directory by running these commands on your Logstash Server:
- sudo mkdir -p /opt/logstash/patterns
- sudo chown logstash: /opt/logstash/patterns
If your setup differs, simply adjust this guide to match your environment.
Grok works by matching text against patterns, which are built from regular expressions, and assigning the matched pieces to identifiers. The syntax for a grok pattern is %{PATTERN:IDENTIFIER}. A Logstash filter includes a sequence of grok patterns that matches and assigns various pieces of a log message to various identifiers, which is how the logs are given structure.
To learn more about grok, visit the Logstash grok page, and the Logstash Default Patterns listing.
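For example, a minimal grok filter (the client, method, and request field names here are only illustrative and are not part of this guide's configuration) could parse a log line such as 55.3.244.1 GET /index.html like this:
filter {
  grok {
    # Splits a line like "55.3.244.1 GET /index.html" into
    # "client", "method", and "request" fields.
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}" }
  }
}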
Each main section following this will include the additional configuration details that are necessary to gather and filter logs for a given application. For each application that you want to log and filter, you will have to make some configuration changes on both the client server (Filebeat) and the Logstash server.
If there is a Logstash Patterns subsection, it will contain grok patterns that can be added to a new file in /opt/logstash/patterns on the Logstash Server. This will allow you to use the new patterns in Logstash filters.
The Logstash Filter subsections will include a filter that can be added to a new file, between the input and output configuration files, in /etc/logstash/conf.d on the Logstash Server. The filter determines how the Logstash server parses the relevant log files. Remember to restart the Logstash service after adding a new filter, to load your changes.
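For example, on the Ubuntu setup from the prerequisite tutorial, the restart command would be:
- sudo service logstash restart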
Filebeat Prospectors are used to specify which logs to send to Logstash. Additional Prospector configurations should be added to the /etc/filebeat/filebeat.yml file directly after existing prospectors in the prospectors section:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      document_type: syslog
    -
      paths:
        - /var/log/app/*.log
      document_type: app-access
...
In the above example, the second Prospector sends all of the .log files in /var/log/app/ to Logstash with the app-access type. After any changes are made, Filebeat must be reloaded to put them into effect.
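For example, on the Ubuntu setup used throughout this guide, that would be:
- sudo service filebeat restart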
Now that you know how to use this guide, the rest of the guide will show you how to gather and filter application logs!
Nginx log patterns are not included in Logstash’s default patterns, so we will add Nginx patterns manually.
On your ELK server, create a new pattern file called nginx:
- sudo vi /opt/logstash/patterns/nginx
Then insert the following lines:
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}
Save and exit. The NGINXACCESS pattern parses the log message and assigns the data to various identifiers (e.g. clientip, ident, auth, etc.).
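For reference, a made-up access log entry such as the following would match the NGINXACCESS pattern, yielding clientip 184.92.1.10, verb GET, request /index.html, response 200, and so on:
184.92.1.10 - - [07/Mar/2016:17:01:02 +0000] "GET /index.html HTTP/1.1" 200 3700 "-" "Mozilla/5.0 (X11; Linux x86_64)"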
Next, change the ownership of the pattern file to logstash:
- sudo chown logstash: /opt/logstash/patterns/nginx
On your ELK server, create a new filter configuration file called 11-nginx-filter.conf:
- sudo vi /etc/logstash/conf.d/11-nginx-filter.conf
Then add the following filter:
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
Save and exit. Note that this filter will attempt to match messages of type nginx-access with the NGINXACCESS pattern defined above.
Now restart Logstash to reload the configuration:
- sudo service logstash restart
On your Nginx servers, open the filebeat.yml configuration file for editing:
- sudo vi /etc/filebeat/filebeat.yml
Add the following Prospector in the filebeat section to send the Nginx access logs as type nginx-access to your Logstash server:
-
  paths:
    - /var/log/nginx/access.log
  document_type: nginx-access
Save and exit. Reload Filebeat to put the changes into effect:
- sudo service filebeat restart
Now your Nginx logs will be gathered and filtered!
Apache’s log patterns are included in the default Logstash patterns, so it is fairly easy to set up a filter for it.
Note: If you are using a RedHat variant, such as CentOS, the logs are located at /var/log/httpd instead of /var/log/apache2, which is used in the examples.
On your ELK server, create a new filter configuration file called 12-apache.conf:
- sudo vi /etc/logstash/conf.d/12-apache.conf
Then add the following filter:
filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
Save and exit. Note that this filter will attempt to match messages of type apache-access with the COMBINEDAPACHELOG pattern, one of the default Logstash patterns.
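For reference, a standard combined-format entry such as this made-up example would be parsed into clientip, auth, timestamp, verb, request, response, referrer, agent, and other fields:
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 (Win98)"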
Now restart Logstash to reload the configuration:
- sudo service logstash restart
On your Apache servers, open the filebeat.yml configuration file for editing:
- sudo vi /etc/filebeat/filebeat.yml
Add the following Prospector in the filebeat section to send the Apache logs as type apache-access to your Logstash server:
-
  paths:
    - /var/log/apache2/access.log
  document_type: apache-access
Save and exit. Reload Filebeat to put the changes into effect:
- sudo service filebeat restart
Now your Apache logs will be gathered and filtered!
It is possible to collect and parse logs of pretty much any type. Try writing your own filters and patterns for other log files.
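As a starting point, here is a sketch; the log format, pattern name (MYAPPLOG), and file contents below are invented purely for illustration. Suppose an application logs lines like 2016-03-07 17:01:02 WARN Disk usage at 91%. You could add a pattern file such as /opt/logstash/patterns/myapp containing:
MYAPPLOG %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}
and a filter that applies it to the app-access type from the earlier Prospector example:
filter {
  if [type] == "app-access" {
    grok {
      # Hypothetical pattern; splits the line into timestamp, level, and msg fields.
      match => { "message" => "%{MYAPPLOG}" }
    }
  }
}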
Feel free to comment with filters that you would like to see, or with patterns of your own!
If you aren’t familiar with using Kibana, check out this tutorial: How To Use Kibana Visualizations and Dashboards.
Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
This series will teach you how to install Logstash and Kibana on Ubuntu, how to add more filters to structure your log data, and then how to use Kibana.
Hi. I am trying to troubleshoot Logstash not collecting Nginx logs. I followed the tutorial on setting up Elasticsearch, Kibana, and Logstash, and now this one, but the Nginx logs don't seem to be flowing through.
Is this logstash-forwarder config correct?
Thanks, Jock
@jock.forrester: There’s a missing comma between the two “paths” objects. Try using this instead:
Hi, I'm using a Cisco ASA 5505. When I look in /opt/logstash/patterns/firewalls I don't find the ASA 5505. Also I can't change the option … All I want is to have source IP, destination IP, source port, and destination port as fields in Kibana. Thanks
@sammdoun post a sample log
{"message":"<166>Aug 20 2014 05:51:34: %ASA-6-302014: Teardown TCP connection 8440 for inside:192.168.2.209/51483 to outside:104.16.13.8/80 duration 0:00:53 bytes 13984 TCP FINs\n","@version":"1","@timestamp":"2014-08-20T14:17:58.452Z","host":"192.168.2.1","tags":["_grokparsefailure"],"priority":13,…
@sammdoun: Assuming your message is (and the rest of the relevant logs are similar):
The following pattern should match and name the fields you specified:
I’m assuming the first IP/port is source and the second is destination.
OK :) but how can I add them as fields in my Kibana?
Can you see anything wrong with my logstash-forwarder config? Logs are not sent when configured as follows:
Logs are sent OK when it is configured as follows:
@manicas! Thank you very much! I'm working on a SIEM project and you really helped me. Actually, the system used to work two weeks ago, but now I have an error message which is "Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]". I think it's due to the server's private IP address, because it's DHCP. I've generated another SSL certificate with the new address, but I still have this error! Can you help me please?
How do I install plugins? bin/plugin is not inside the elasticsearch directory.