Analyzing nginx JSON Logs with ELK 7.4.0


Single-Node ELK 7.4.0 Deployment

Environment Preparation

Create a dedicated elk user and the application, data, and log directories:

useradd elk
mkdir /srv/{app,data,logs}/elk
chown -Rf elk:elk /srv/{app,data,logs}/elk
Raise the open-file and process limits in /etc/security/limits.conf, either globally with the wildcard entries or just for the elk user:

*  soft  nofile 65536
*  hard  nofile 65536
*  soft  nproc  65536
*  hard  nproc  65536

elk  soft  nofile 65536
elk  hard  nofile 65536
elk  soft  nproc  65536
elk  hard  nproc  65536

All steps of the ELK installation must be performed as the elk user!

su - elk
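
Once logged in as elk, it is worth confirming that the new limits are in effect; the expected values come from the limits.conf entries above:

ulimit -n   # open files, should report 65536
ulimit -u   # max user processes, should report 65536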

Elasticsearch

This guide uses a single-node Elasticsearch instance rather than a cluster; cluster deployment will be covered in a later update.

First, download the Elasticsearch package (a local mirror is used in the commands below); installing from the tar archive is recommended:

cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/elasticsearch/7.4.0/elasticsearch-7.4.0-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.4.0-linux-x86_64.tar.gz
mv elasticsearch-7.4.0 elasticsearch

Edit config/elasticsearch.yml:

cluster.name: es-cluster
node.name: es-1
node.master: true     # eligible to act as master node
node.data: true       # eligible to hold data
path.data: /srv/data/elk/elasticsearch   # data directory
path.logs: /srv/logs/elk/elasticsearch   # log directory

network.host: 127.0.0.1     # local access only; to allow other hosts, set a network address or 0.0.0.0
http.port: 9200             # HTTP port, default 9200

http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: false
Set the Elasticsearch JVM heap in config/jvm.options (Elastic recommends identical -Xms/-Xmx values, no more than half of physical RAM):

-Xms4g
-Xmx4g

Start Elasticsearch as a daemon:

/srv/app/elk/elasticsearch/bin/elasticsearch -d
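
After a short startup delay the node answers on port 9200; a minimal smoke test:

curl http://127.0.0.1:9200                 # node and version banner
curl http://127.0.0.1:9200/_cat/health?v   # cluster health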

Kibana

cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/kibana/7.4.0/kibana-7.4.0-linux-x86_64.tar.gz
tar -zxvf kibana-7.4.0-linux-x86_64.tar.gz
mv kibana-7.4.0-linux-x86_64 kibana
Edit config/kibana.yml:

server.port: 5601
server.host: "localhost"   # or 0.0.0.0 to listen on all interfaces
server.name: "kibana"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
i18n.locale: "en"     # set to zh-CN for the Chinese UI
/srv/app/elk/kibana/bin/kibana
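
Note that bin/kibana runs in the foreground. One way to keep it running after logout, reusing the log directory created earlier (the log file name is an assumption):

nohup /srv/app/elk/kibana/bin/kibana > /srv/logs/elk/kibana.log 2>&1 &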

Logstash

cd /srv/app/elk;
wget http://172.19.30.116/mirror/elk/logstash/7.4.0/logstash-7.4.0.tar.gz
tar -zxvf logstash-7.4.0.tar.gz
mv logstash-7.4.0 logstash
Set the Logstash JVM heap in config/jvm.options:

-Xms1g
-Xmx1g

At this point the ELK stack is deployed. Next we need Redis and Filebeat: Redis serves as a staging queue for logs, and Filebeat collects the logs from nginx or other applications.

Redis

yum install epel-release -y
yum install redis -y
chkconfig redis on
service redis start
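
The Filebeat and Logstash configurations later in this guide assume Redis is reachable at 192.168.1.1:7000, while the packaged default is 127.0.0.1:6379. Assuming that address, the relevant /etc/redis.conf changes and a connectivity check look like:

# /etc/redis.conf
bind 192.168.1.1
port 7000

service redis restart
redis-cli -h 192.168.1.1 -p 7000 ping   # expect: PONG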

Filebeat

Install Filebeat on the nginx node, add a new nginxjson log_format to the nginx configuration, and point the access log at this format (see the snippet after the format definition):

log_format nginxjson '{"@timestamp":"$time_iso8601",'
                  '"host":"$server_addr",'
                  '"service":"nginx",'
                  '"trace":"$upstream_http_ctx_transaction_id",'
                  '"clientip":"$remote_addr",'
                  '"remote_user":"$remote_user",'
                  '"request":"$request",'
                  '"url":"$scheme://$http_host$request_uri",'
                  '"http_user_agent":"$http_user_agent",'
                  '"server_protocol":"$server_protocol",'
                  '"size":$body_bytes_sent,'
                  '"responsetime":$request_time,'
                  '"upstreamtime":"$upstream_response_time",'
                  '"upstreamhost":"$upstream_addr",'
                  '"http_host":"$host",'
                  '"domain":"$host",'
                  '"xff":"$http_x_forwarded_for",'
                  '"x_clientOs":"$http_x_clientOs",'
                  '"x_access_token":"$http_x_access_token",'
                  '"referer":"$http_referer",'
                  '"status":"$status"}';
Install Filebeat from the RPM and enable it at boot:

rpm -ivh http://172.19.30.116/mirror/elk/filebeat/7.4.0/filebeat-7.4.0-x86_64.rpm
chkconfig filebeat on

Modify the Filebeat configuration at /etc/filebeat/filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/nginx_access.log
  tags: ["nginx-access"]
  tail_files: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.redis:
  enabled: true
  hosts: ["192.168.1.1:7000"]    # your Redis server; the default Redis port is 6379, adjust to your setup
  port: 7000
  key: nginx
  db: 0
  datatype: list
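
After saving the config, start Filebeat and confirm that events are queuing in Redis; LLEN returns the length of the nginx list (the key and address come from the config above):

service filebeat start
redis-cli -h 192.168.1.1 -p 7000 llen nginx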

Now let's go back and configure Logstash:

mkdir /srv/app/elk/logstash/config/conf.d
vim /srv/app/elk/logstash/config/conf.d/nginx-logs.conf

Write the following content:

input {
     redis {
         host => "192.168.1.1"
         port => "7000"
         key => "nginx"
         data_type => "list"
         threads => "5"
         db => "0"
    }
}

filter {
    json {
        source => "message"
        remove_field => ["beat"]
    }

    geoip {
        source => "clientip"
        target => "geoip"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }

    grok {
        match => ["message", "%{TIMESTAMP_ISO8601:isotime}"]
    }

    date {
        locale => "en"
        match => ["isotime", "ISO8601"]
        target => "@timestamp"
    }

    mutate {
        convert => [ "[geoip][coordinates]", "float"]
        # remove_field => ["message"]
    }
}

output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "logstash-nginx-logs-%{+YYYY.MM.dd}"
    }
}
Start Logstash with this pipeline:

/srv/app/elk/logstash/bin/logstash -f /srv/app/elk/logstash/config/conf.d/nginx-logs.conf
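
The pipeline file can be syntax-checked before launch, and once events flow a daily index should appear in Elasticsearch; a quick sanity check:

# optional: validate the pipeline file first
/srv/app/elk/logstash/bin/logstash -f /srv/app/elk/logstash/config/conf.d/nginx-logs.conf --config.test_and_exit

# after startup, confirm the index exists
curl 'http://127.0.0.1:9200/_cat/indices/logstash-nginx-logs-*?v'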

Postscript

Kibana itself has no authentication in this setup, so it is placed behind an nginx reverse proxy that adds HTTP basic auth. The main configuration is as follows:

server {
         listen       80;
         server_name  kibana;
         access_log   off;
         error_log    off;

         location / {
             auth_basic         "Kibana";
             auth_basic_user_file  /srv/app/tengine/conf/conf.d/passwd; 
             proxy_pass         http://127.0.0.1:5601;
             proxy_set_header   Host             $host;
             proxy_set_header   X-Real-IP        $remote_addr;
             proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
         }
}
The password file referenced by auth_basic_user_file can be generated with a small script:

#!/bin/bash
read -p "Enter username: " USERNAME
read -p "Enter password: " PASSWD
printf "$USERNAME:$(openssl passwd -crypt $PASSWD)\n" >> passwd

Original post: https://www.cnblogs.com/lizhaojun-ops/p/11962524.html
