Build an ELK Open-Source Real-Time Log Analysis System on CentOS 7

By Jack Gonzales, 2015-10-07 23:01

    Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero-configuration setup, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, automatic search load balancing, and so on.

    Logstash is a completely open-source tool that can collect, parse, and store your logs for later use, such as searching.

    Kibana is a free, open-source tool that provides a friendly web interface for the logs handled by Logstash and Elasticsearch; it can help you collect, analyze, and search important log data.

    The data flow, from the clients that produce the logs to the browser that finally displays them, is as follows:

    logstash-forwarder -> Logstash -> Elasticsearch -> Kibana -> nginx -> client browser

    logstash-forwarder is the client-side log collection tool. It sends logs to the server, where Logstash uses grok matching rules to cut the logs into fields and stores them in Elasticsearch. Kibana then reads the data from Elasticsearch, processes it, and returns it to the client through nginx.

    This article walks through the ELK system installation process.

    Elasticsearch and Logstash both run on the JVM, so the first step is to install a Java environment.

    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"

    rpm -Uvh jdk-8u65-linux-x64.rpm

    Alternatively, you can install the JDK directly with yum, as long as you make sure the correct version gets installed. You can also install from a source tarball, but then you have to set the environment variables yourself:

    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.tar.gz"

    tar zxvf jdk-8u65-linux-x64.tar.gz

    mv jdk1.8.0_65 /usr/local/java

vi /etc/profile

    JAVA_HOME="/usr/local/java"

    PATH=$JAVA_HOME/bin:$PATH

    CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

    export JAVA_HOME

    export PATH

    export CLASSPATH

    source /etc/profile
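    A quick sanity check that the JDK is now on the PATH:

    java -version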

    With the JDK installed, the next step is Elasticsearch:

    rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm

    rpm -ivh elasticsearch-1.7.2.noarch.rpm

    Modify the configuration file as follows:

    vim /etc/elasticsearch/elasticsearch.yml

    path.data: /data/db

    network.host: 192.168.100.233

    Elasticsearch plugins are installed as follows:

    cd /usr/share/elasticsearch/

    ./bin/plugin -install mobz/elasticsearch-head

    ./bin/plugin -install lukas-vlcek/bigdesk/2.5.0

    Then start Elasticsearch:

    systemctl start elasticsearch
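    To confirm Elasticsearch is up, query it over HTTP using the network.host address configured above:

    curl http://192.168.100.233:9200

    curl 'http://192.168.100.233:9200/_cluster/health?pretty'

    Both should return JSON; the second one includes the cluster status.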

    Next, install Kibana.

    Go to https://www.elastic.co/downloads/kibana and find the right version; each release lists its compatibility on a single line, so pay attention to it. The one used here reads "Compatible with Elasticsearch 1.4.4 - 1.7".

    Here I chose kibana-4.1.3-linux-x64.tar.gz.

    wget https://download.elastic.co/kibana/kibana/kibana-4.1.3-linux-x64.tar.gz

    tar xf kibana-4.1.3-linux-x64.tar.gz

    mv kibana-4.1.3-linux-x64 /usr/local/kibana

    cd !$

    vim config/kibana.yml

    port: 5601

    host: "192.168.100.233"

    elasticsearch_url: "http://192.168.100.233:9200"

    This configuration makes Kibana listen on port 5601 and fetch its data from Elasticsearch through port 9200.

    Next install nginx. It can also be built from source; yum is used here for convenience.

    yum -y install nginx

    vim /etc/nginx/nginx.conf

    Change the server block to the following:

    server {

     listen 80 default_server;

     listen [::]:80 default_server;

     server_name _;

     location / {

     proxy_pass http://192.168.100.233:5601;

     proxy_http_version 1.1;

     proxy_set_header Upgrade $http_upgrade;

     proxy_set_header Connection 'upgrade';

     proxy_set_header Host $host;

     proxy_cache_bypass $http_upgrade;

     }

    }
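    Once the server block (and the log format change below) is in place, nginx can check the file for syntax errors before you start or reload it:

    nginx -t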

    Also modify the log format to the following, and save:

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $upstream_response_time $request_time $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$request_body" '
                    '$scheme $upstream_addr';

    The log format is changed so that it matches the Logstash grok rules used later.
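    For reference, an access-log entry in this format looks roughly like this (hypothetical values):

    192.168.100.50 - - [07/Oct/2015:22:58:01 +0800] "GET /index.html HTTP/1.1" 200 0.002 0.003 612 "-" "Mozilla/5.0" "-" "-" http 192.168.100.233:5601

    Keep a real line like this handy for the grok debugging described later.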

    Start nginx and Kibana:

    systemctl start nginx

    nohup /usr/local/kibana/bin/kibana -l /var/log/kibana.log &

    Alternatively, you can use the following two scripts to run Kibana as a service:

    cd /etc/init.d && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init

    cd /etc/default && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
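    If you go this route, the init script presumably needs to be made executable before it can be used (a sketch, assuming the script from the gist works as-is):

    chmod +x /etc/init.d/kibana

    service kibana start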

    That covers getting Kibana up.

    Next, install Logstash.

rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch

    vi /etc/yum.repos.d/logstash.repo

    [logstash-1.5]
    name=Logstash repository for 1.5.x packages
    baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
    gpgcheck=1
    gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    enabled=1

    yum -y install logstash

    The package is fairly large and may download slowly from within China; you can grab it from the website with a download manager such as Xunlei (Thunder) instead, which is faster.

    Create the TLS certificate

    Communication between Logstash and logstash-forwarder is authenticated with a TLS certificate. logstash-forwarder only needs the public certificate, while Logstash needs both the certificate and the private key. Generate the SSL certificate on the Logstash server.

    There are two ways to create the SSL certificate: with a specified IP address, or with a specified FQDN (DNS name).

    1. Specify the IP address

    vi /etc/pki/tls/openssl.cnf

    Under [v3_ca], add the setting subjectAltName = IP:192.168.100.233. Bear in mind that this is important: the file has another place with a subjectAltName setting as well, and putting it in the wrong one will make the certificate verification fail.

    cd /etc/pki/tls

    openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    Pay attention to setting -days to a large value so the certificate does not expire too soon.
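    You can verify what actually ended up in the certificate, in particular the validity period and the subjectAltName:

    openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -dates

    openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'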

    2. Use the FQDN

    There is no need to modify the openssl.cnf file.

    cd /etc/pki/tls

    openssl req -subj '/CN=logstash.abcde.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    Replace logstash.abcde.com with your own domain name, and add an A record for logstash.abcde.com in your DNS.

    Either way works, but note that with the IP-based certificate, the certificate becomes unusable if the Logstash server's IP address ever changes.

    Configure Logstash

    Logstash configuration files are in JSON format and live in the /etc/logstash/conf.d directory. A configuration consists of three parts: input, filter, and output.

    First, create a 01-lumberjack-input.conf file to set up the lumberjack input, the protocol logstash-forwarder uses:

    vi /etc/logstash/conf.d/01-lumberjack-input.conf

    input {

     lumberjack {

     port => 5043

     type => "logs"

     ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"

     ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"

     }

    }
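    If firewalld is enabled on the CentOS 7 server (an assumption; it is the default), the lumberjack port also has to be opened, or the forwarders will not be able to connect:

    firewall-cmd --permanent --add-port=5043/tcp

    firewall-cmd --reload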

    Then create 02-nginx.conf to filter the nginx logs:

    vi /etc/logstash/conf.d/02-nginx.conf

    filter {

     if [type] == "nginx" {

     grok {

     match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:upstime}|-) %{NUMBER:reqtime} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{QS:reqbody} %{WORD:scheme} (?:%{IPV4:upstream}(:%{POSINT:port})?|-)" }

     add_field => [ "received_at", "%{@timestamp}" ]

     add_field => [ "received_from", "%{host}" ]

     }

     date {

     match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]

     }

     geoip {

     source => "clientip"

     add_tag => [ "geoip" ]

     fields => ["country_name", "country_code2","region_name", "city_name", "real_region_name", "latitude", "longitude"]

     remove_field => [ "[geoip][longitude]", "[geoip][latitude]" ]

     }

     }

    }

    This filter looks for logs marked with the type "nginx" (the type defined in logstash-forwarder) and tries to use grok to parse the incoming nginx logs into structured, queryable fields.

    The type has to match the one set in logstash-forwarder.

    At the same time, make sure the nginx log format has been changed to the one shown above.

    If the log format is different, the grok rules have to be rewritten.

    You can debug the rules with the online tool at http://grokdebug.herokuapp.com/. When ELK shows no data, the error is usually here.

    If grok fails to match the logs, don't read on; get this working first.

    Also take a look at the grok patterns at http://grokdebug.herokuapp.com/patterns# ; knowing them is a great help when writing matching rules later.
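    Another convenient way to verify the rule locally is a small throwaway config that reads a log line from stdin, runs it through the same filter, and prints the parsed result. A minimal sketch (the file name and path are arbitrary):

    input { stdin { type => "nginx" } }
    # paste the whole filter { ... } block from 02-nginx.conf here
    output { stdout { codec => rubydebug } }

    /opt/logstash/bin/logstash -f /tmp/99-test.conf

    Paste a line from the nginx access log; the fields grok extracted are printed to the terminal, and a _grokparsefailure tag means the pattern still needs work.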

    Finally, create a file that defines the output:

    vi /etc/logstash/conf.d/03-lumberjack-output.conf

    output {

     if "_grokparsefailure" in [tags] {

     file { path =>

    "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log" }

     }

     elasticsearch {

     host => "10.1.19.18"

     protocol => "http"

     index => "logstash-%{type}-%{+YYYY.MM.dd}"

     document_type => "%{type}"

     workers => 5

     template_overwrite => true

     }

     #stdout { codec => rubydebug }

    }

    This stores structured logs in Elasticsearch and writes the logs that grok failed to match to a file.

    Note that the configuration file names are prefixed with 01-99, because Logstash loads configuration files in order.

    When debugging, don't write to Elasticsearch at first; send the output to stdout instead. At the same time, watch the Logstash log; many errors are easy to locate there.

    It is best to test the configuration files before starting the logstash service:

    /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*

    Configuration OK

    You can also test files one by one until each reports OK; otherwise the logstash service will not come up.

    The last step is to start the logstash service:

    systemctl start logstash
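    Once Logstash is running, the lumberjack input should be listening on port 5043, which can be checked with:

    ss -ntlp | grep 5043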

    Then configure the logstash-forwarder client.

    Install logstash-forwarder:

    wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm

    rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm

    Copy the public certificate created during the Logstash setup to each logstash-forwarder server:

    scp 192.168.100.233:/etc/pki/tls/certs/logstash-forwarder.crt /etc/pki/tls/certs/

    Configure logstash-forwarder:

    vi /etc/logstash-forwarder.conf

    {
        "network": {
            "servers": [ "10.1.19.18:5043" ],
            "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
            "timeout": 30
        }
    }
