Please credit the source when reposting: "Building an ELK + Filebeat Log Analysis Platform" | shuwoom.com

In this article I will introduce ELK, a centralized log analysis system, and walk through the open-source components that make it up: ElasticSearch, Logstash, Kibana, and the newer stack member Filebeat. We will then set up ELK step by step and learn the common ways of using it.

I. Introducing ELK

1. What is ELK

Nowadays most systems run in distributed environments, with machines spread across many hosts. If we inspect logs the old way, logging in to each machine one by one, it is inefficient and time-consuming. What we need is a centralized log storage and analysis system. Such a system should have the following characteristics:

  • Collection: able to gather log data from many different sources
  • Transport: able to reliably ship the log data to a central system
  • Storage: able to store the log data
  • Analysis: supports analysis through a UI
  • Alerting: provides error reporting and monitoring mechanisms

Splunk meets all of the requirements above, and does so very well, but it is a commercial, paid product, which puts many people off. ELK filled the gap for an open-source centralized log storage solution. Besides ELK there are many other open-source log storage projects, such as Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, and Cloudera's Flume.

Today ELK is still the most widely used solution in the industry. Chinese companies such as Sina, Tencent, Huawei, Meituan and Ele.me, as well as IBM abroad, all use ELK. As for why, I think such broad adoption by these large companies speaks for ELK's quality by itself.

2. The ELK stack

Note that ELK is not a single piece of software but a complete solution. ELK is an acronym for ElasticSearch, Logstash and Kibana, which are usually deployed together and can also be combined with Filebeat. The stack is shown in the figure below:

The relationships between these components are shown in the following flow diagram:

[Figure: ELK/Filebeat data flow]

Data can be collected by agents such as Filebeat, or directly by Logstash. The collected data is sent to Logstash for filtering and then written to ElasticSearch, which builds indices over it. Finally, Kibana runs various analyses on the data and presents the results as charts.

(1) ElasticSearch

ElasticSearch is an enterprise-grade open-source search engine based on Lucene. Written in Java, it uses Lucene at its core for all indexing and search functionality and provides a distributed full-text search engine. Its main features:

  • Real-time analytics
  • Distributed document storage, with every field indexable
  • Document-oriented: everything is a document
  • Highly available and easily scalable, with support for clustering, sharding and replication
  • A friendly interface with RESTful API access

A common cluster layout looks like this:

[Figure: ElasticSearch cluster topology]
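The RESTful interface mentioned above can be exercised with plain curl. The following is a minimal sketch, assuming an ElasticSearch instance is already listening on localhost:9200 (as configured in Part II below); the index name `tutorial` and the document fields are made up for illustration:

```shell
# Index a document (creates the "tutorial" index on first use)
curl -XPUT 'localhost:9200/tutorial/doc/1' \
     -H 'Content-Type: application/json' \
     -d '{"title": "hello elk", "views": 42}'

# Fetch it back by id
curl 'localhost:9200/tutorial/doc/1'

# Full-text search across the index
curl 'localhost:9200/tutorial/_search?q=title:hello'
```

Every operation (indexing, retrieval, search, cluster administration) goes through the same HTTP interface, which is what makes ElasticSearch easy to script against.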

(2) Logstash

Logstash is a data collection engine with real-time pipelining capability, written in Ruby. It can collect data from multiple sources at once, transform it, and send it to a storage engine such as ElasticSearch.

[Figure: Logstash functionality]

It consists of three parts:

  • Shipper: sends log data
  • Broker: collects and buffers the data
  • Indexer: writes the data to storage

[Figure: Logstash components]
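As a minimal sketch of how Logstash pipelines are structured (not tied to this tutorial's setup), the following configuration reads events from stdin, applies no filtering, and prints each event to stdout:

```
input  { stdin {} }
filter {}
output { stdout { codec => rubydebug } }
```

Saved to a file, it can be run with `/usr/share/logstash/bin/logstash -f <file>`; typing a line into the terminal then prints it back as a structured event. Every configuration later in this article follows the same input/filter/output shape.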

(3) Kibana

Kibana is written in JavaScript and provides a web platform for analysis and visualization on top of ElasticSearch. It can search and interact with the data in ElasticSearch indices and generate charts along various dimensions, as shown below:

[Figure: Kibana data visualization]

(4) Filebeat

Filebeat is the newest member of the ELK stack: a lightweight open-source log file collector, developed from the Logstash-Forwarder codebase as its replacement.

You install Filebeat on the servers whose logs you want to collect and point it at the log directories. Filebeat then reads the data and sends it to Logstash for filtering and parsing, or directly to a storage engine such as ElasticSearch for centralized storage and analysis.

The diagram below shows Filebeat's workflow. When the Filebeat service starts, it launches one or more prospectors that watch the log directories or files you specified. For each log file a prospector finds, Filebeat starts a harvester; each harvester reads the new content of one log file and sends the new log data to the spooler, which aggregates the events. Finally, Filebeat ships the aggregated data to the destination you configured, such as Logstash or ElasticSearch.

[Figure: Filebeat workflow]

II. Setting Up ELK

Note: all installation steps below are performed on CentOS 7.2 x86_64.

1. Install Java

yum -y install java-1.8.0-openjdk-devel.x86_64

Output like the following indicates a successful installation:

[root@VM_16_17_centos ~]# java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

2. Install ElasticSearch

(1) Download and install ElasticSearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
rpm -ivh elasticsearch-6.2.4.rpm

(2) Configure ElasticSearch

vim /etc/elasticsearch/elasticsearch.yml

Uncomment the following settings:

bootstrap.memory_lock: true
network.host: localhost
http.port: 9200

With this configuration, ElasticSearch listens on the default port 9200 on localhost.

vim /etc/sysconfig/elasticsearch

Uncomment the following setting:

MAX_LOCKED_MEMORY=unlimited

(3) Start ElasticSearch and enable it at boot

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

With the command below, we can see that ElasticSearch is listening on port 9200:

[Figure: ElasticSearch listening on port 9200]

We can also check whether the ElasticSearch service is running with `curl localhost:9200`; if all is well, it returns:

{
  "name" : "Ibmm5BR",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_azcjoJxR3Guci8DhOMhdA",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

(4) Allow external access to ElasticSearch

With the configuration above, the ElasticSearch service cannot be accessed from outside the machine. This is because it listens only on 127.0.0.1; to allow external access, it must listen on 0.0.0.0 instead.

vim /etc/elasticsearch/elasticsearch.yml

Change the setting to:
network.host: 0.0.0.0

Then restart ElasticSearch:

systemctl restart elasticsearch

Now, checking port 9200 again shows that nothing is listening on it anymore. `curl localhost:9200` also fails with:

curl: (7) Failed connect to localhost:9200; Connection refused

What happened? Let's look at the ElasticSearch log file:

vim /var/log/elasticsearch/elasticsearch.log
[2019-01-19T23:17:02,110][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2019-01-19T23:17:02,345][INFO ][o.e.n.Node               ] [] initializing ...
......
[2019-01-19T23:17:08,384][ERROR][o.e.b.Bootstrap          ] [Ibmm5BR] node validation exception
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2019-01-19T23:17:08,409][INFO ][o.e.n.Node               ] [Ibmm5BR] stopping ...
[2019-01-19T23:17:08,501][INFO ][o.e.n.Node               ] [Ibmm5BR] stopped
[2019-01-19T23:17:08,501][INFO ][o.e.n.Node               ] [Ibmm5BR] closing ...
[2019-01-19T23:17:08,536][INFO ][o.e.n.Node               ] [Ibmm5BR] closed

The log file contains several WARN and ERROR entries. Why did ElasticSearch run fine when listening on the local address 127.0.0.1, but fail to start once we allowed external access on 0.0.0.0?

This is because once ElasticSearch is configured to accept external connections, it treats the machine as a production environment and enforces its bootstrap checks, which is why these warnings and errors appear.

Let's deal with them one by one.

(5) Fixing the ElasticSearch startup failure

In the error log above, we saw the following hint:

......
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
......

vim /etc/security/limits.conf

Add the following two lines of configuration:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

We also need to create a regular user to start the ElasticSearch service; otherwise the following error is reported, since ElasticSearch refuses to run as root:

[root@bogon ~]# /usr/share/elasticsearch/bin/elasticsearch 
[2019-01-19T23:29:44,863][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
	... 6 more

We can create a dedicated elasticsearch user for starting the ElasticSearch service:

adduser elasticsearch   # create the elasticsearch user
passwd elasticsearch    # set a password for the elasticsearch user

Then switch to the elasticsearch user with `su elasticsearch`, which fails with:

This account is currently not available.

This is because the account has no usable login shell; assign one with:

usermod -s /bin/bash elasticsearch

Switching now succeeds. Starting ElasticSearch again no longer reports the root-login error, but new errors appear:

......
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [3798] for user [elasticsearch] is too low, increase to at least [4096]
......

The first error occurs because Linux limits the maximum number of open files per process, and the second because it limits the number of threads per user. Since we start ElasticSearch as the elasticsearch user, add the following configuration as root:

vim /etc/security/limits.conf

Add the following configuration:

elasticsearch - nofile 65536

elasticsearch soft nproc 4096
elasticsearch hard nproc 4096
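After editing limits.conf, it is worth confirming that the new limits are actually in effect before starting ElasticSearch again. A quick sketch; run these in a fresh login shell for the elasticsearch user, since the limits are applied at login time:

```shell
ulimit -l   # max locked memory; should report "unlimited" after the memlock change
ulimit -n   # max open files; should report 65536
ulimit -u   # max user processes
```

If the values still show the old limits, log out and back in, or check that pam_limits is enabled for your login path.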

After starting again, output like the following indicates a successful start.

To run ElasticSearch in the background, just add the "-d" flag when starting it.

......
[2019-01-19T23:59:03,476][INFO ][o.e.n.Node               ] initialized
[2019-01-19T23:59:03,477][INFO ][o.e.n.Node               ] [Ibmm5BR] starting ...
[2019-01-19T23:59:04,715][INFO ][o.e.t.TransportService   ] [Ibmm5BR] publish_address {192.168.88.128:9300}, bound_addresses {[::]:9300}
[2019-01-19T23:59:04,726][INFO ][o.e.b.BootstrapChecks    ] [Ibmm5BR] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-01-19T23:59:07,881][INFO ][o.e.c.s.MasterService    ] [Ibmm5BR] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300}
[2019-01-19T23:59:07,885][INFO ][o.e.c.s.ClusterApplierService] [Ibmm5BR] new_master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300}, reason: apply cluster state (from master [master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-01-19T23:59:07,998][INFO ][o.e.h.n.Netty4HttpServerTransport] [Ibmm5BR] publish_address {192.168.88.128:9200}, bound_addresses {[::]:9200}
[2019-01-19T23:59:07,998][INFO ][o.e.n.Node               ] [Ibmm5BR] started
[2019-01-19T23:59:08,000][INFO ][o.e.g.GatewayService     ] [Ibmm5BR] recovered [0] indices into cluster_state

Now port 9200 is bound on all addresses, and the ElasticSearch service can be accessed from outside:

[root@bogon Desktop]# netstat -anp|grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      18037/java          
tcp6       0      0 ::1:48854               ::1:9200                TIME_WAIT   -    

Note: listening on 0.0.0.0 is not recommended, as it creates a security risk. It is better to listen on an internal IP and use iptables to block external IPs and restrict which machines may connect.

3. Install Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-x86_64.rpm
rpm -ivh kibana-6.2.4-x86_64.rpm

Configure Kibana:

vim /etc/kibana/kibana.yml

Uncomment the following settings:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Note: to allow external access, change server.host to "0.0.0.0".

Start Kibana:

systemctl enable kibana
systemctl start kibana

A response like the following indicates that Kibana was installed successfully:

[root@bogon Desktop]# curl localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>

Now open http://localhost:5601 in a browser and you will see the management page:

[Figure: Kibana home page]

4. Install Logstash

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.rpm
rpm -ivh logstash-6.2.4.rpm

Start it:

systemctl restart logstash
systemctl enable logstash

It can also be started directly with:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-filebeat-nginx.conf

Note: I will cover the Logstash configuration file in detail later with a practical example.

5. Install Filebeat

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-x86_64.rpm
rpm -ivh filebeat-6.2.4-x86_64.rpm

Reload systemd and enable the service at boot:

systemctl daemon-reload
systemctl enable filebeat

At this point, the whole ELK + Filebeat stack is installed. Next, we will walk through a practical example to deepen our understanding of ELK.

III. ELK and Filebeat in Practice

1. Collecting nginx logs

In this setup, Logstash collects the logs, ElasticSearch stores the data, and Kibana displays it. The log files produced by an nginx server serve as the data source.

[Figure: ELK architecture]

First, modify the nginx configuration so that its log output is written as JSON (this is optional, but it makes later log analysis much easier). Change log_format in /etc/nginx/nginx.conf to the following:

    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":"$body_bytes_sent",'
        '"responsetime":"$request_time",'
        '"user_agent":"$http_user_agent",'
        '"request":"$request",'
        '"uri":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
    access_log  /var/log/nginx/access.log  access_json;

Visit the nginx home page again; the entries now appended to /var/log/nginx/access.log are in JSON format:

{"@timestamp":"2019-01-22T13:48:13-08:00","host":"::1","clientip":"::1","size":"0","responsetime":"0.000","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0","request":"GET /nginx-logo.png HTTP/1.1","uri":"/nginx-logo.png","domain":"localhost","xff":"-","referer":"http://localhost/","status":"304"}
{"@timestamp":"2019-01-22T13:48:13-08:00","host":"::1","clientip":"::1","size":"0","responsetime":"0.000","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0","request":"GET /poweredby.png HTTP/1.1","uri":"/poweredby.png","domain":"localhost","xff":"-","referer":"http://localhost/","status":"304"}
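One immediate benefit of the JSON format is that individual fields can be pulled out with ordinary shell tools, even before any ELK component is involved. A small sketch; the sample line and temporary file path are made up for illustration, and a JSON-aware tool such as jq is preferable when available:

```shell
# Write one sample access-log line in the format produced by the log_format above
cat > /tmp/access_sample.log <<'EOF'
{"@timestamp":"2019-01-22T13:48:13-08:00","host":"::1","clientip":"::1","size":"0","responsetime":"0.000","user_agent":"Mozilla/5.0","request":"GET / HTTP/1.1","uri":"/","domain":"localhost","xff":"-","referer":"-","status":"304"}
EOF

# Extract the "status" field from every line with sed
sed -n 's/.*"status":"\([0-9]*\)".*/\1/p' /tmp/access_sample.log
# → 304
```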

Configure Logstash to collect the nginx access log:

vim /etc/logstash/conf.d/nginx.conf

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }
}

filter {}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
}

Check that the configuration file is valid:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit  

The following output means the configuration file is correct:

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

If you want to see the recorded events in the terminal, add stdout {} to the output section of /etc/logstash/conf.d/nginx.conf:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }
}

filter {}

output {
  stdout{}
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
}

Then start it with the following command:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf

Visit the nginx server; JSON output like the following indicates normal operation.

[root@bogon bin]#  /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
            "host" => "::1",
            "size" => "0",
    "responsetime" => "0.000",
      "@timestamp" => 2019-01-22T21:54:56.000Z,
         "referer" => "-",
             "uri" => "/index.html",
          "domain" => "localhost",
        "@version" => "1",
             "xff" => "-",
        "clientip" => "::1",
      "user_agent" => "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0",
            "path" => "/var/log/nginx/access.log",
          "status" => "304",
         "request" => "GET / HTTP/1.1",
            "type" => "nginx-accesslog"
}
{
            "host" => "::1",
            "size" => "0",
    "responsetime" => "0.000",
      "@timestamp" => 2019-01-22T21:54:57.000Z,
         "referer" => "http://localhost/",
             "uri" => "/nginx-logo.png",
          "domain" => "localhost",
        "@version" => "1",
             "xff" => "-",
        "clientip" => "::1",
      "user_agent" => "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0",
            "path" => "/var/log/nginx/access.log",
          "status" => "304",
         "request" => "GET /nginx-logo.png HTTP/1.1",
            "type" => "nginx-accesslog"
}

Alternatively, simply restart with: systemctl restart logstash

Next, by requesting http://localhost:9200/_cat/indices?v we can see that the index has also been generated successfully:

[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index                      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nginx-accesslog-2019.01.22 nXAZ4gHZT-uB4USTDRB_YA   5   1          9            0     71.9kb         71.9kb
[root@bogon Desktop]#

As the final step, we can configure an index pattern in Kibana to browse the logs.

Open localhost:5601:

[Figure: Kibana home page]

We now have a simple log collection system up and running. If you want Logstash to record multiple log sources, for example nginx's error log as well, change the configuration as follows and repeat the steps above:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }

  file {
    path => "/var/log/nginx/error.log"
    start_position => "end"
    codec => "json"
    type => "nginx-errorlog"
  }
}

filter {}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }

}

Here the type field distinguishes the log file types.

2. Filebeat for collection, Logstash for filtering, ES for storage, Kibana for display

First, stop Logstash and Filebeat:

systemctl stop logstash
systemctl stop filebeat

Delete all the index data generated above:

curl -XDELETE http://localhost:9200/_all

Edit the Filebeat configuration file (commenting out output.elasticsearch):

vim /etc/filebeat/filebeat.yml

#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
      service: filebeat-nginx-accesslog
  scan_frequency: 10s

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
      service: filebeat-nginx-errorlog
  scan_frequency: 10s

output.logstash:
  hosts: ["localhost:10515"]

The fields.service values here are what we use in Logstash to tell the log types apart (see the Logstash configuration below).

Create the Logstash configuration file:

vim /etc/logstash/conf.d/logstash-filebeat-nginx.conf

input {
  beats {
    port => 10515
    client_inactivity_timeout => "1200"
  }
}

filter {}

output {
  if [fields][service] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [fields][service] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
}

With that, the Filebeat and Logstash configuration files are in place.

Next, restart Logstash and Filebeat:

systemctl restart logstash
systemctl restart filebeat

Of course, if you want to observe the output of Logstash and Filebeat (add stdout {} to the Logstash configuration first), you can also start them manually from the command line:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-filebeat-nginx.conf
/usr/bin/filebeat -e -c /etc/filebeat/filebeat.yml 

Now let's check whether Filebeat is collecting properly (note: visit the nginx server a few times first to generate logs):

[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index                      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nginx-accesslog-2019.01.26 qak_Mi16RAGr1vo4ZVyW9g   5   1          9            0     60.7kb         60.7kb

Once the index shown above has been generated, we can configure Kibana to view the logs following the earlier steps.

3. Filebeat for collection, ES for storage, Kibana for display

In the setup above, Logstash was used mainly for filtering. Logstash itself is fairly resource-hungry; if you only collect logs and do no filtering, you can skip Logstash and have Filebeat ship the logs directly to ElasticSearch, with Kibana for display. Filebeat's own resource footprint is small, so it is the recommended tool for dedicated log collection.

This setup only requires configuring Filebeat:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
      service: filebeat-nginx-accesslog
  scan_frequency: 10s

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
      service: filebeat-nginx-errorlog
  scan_frequency: 10s

setup.template.name: "index-%{[beat.version]}"
setup.template.pattern: "index-%{[beat.version]}"

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "index-%{[beat.version]}-%{[fields.service]:other}-%{+yyyy.MM.dd}"

Here, fields.service distinguishes access.log from error.log.
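To see what the index pattern above actually expands to, we can substitute the placeholders by hand. A sketch, assuming beat.version reports 6.2.4 (the version installed in Part II; substitute whatever version your Filebeat actually reports):

```shell
# Expand index-%{[beat.version]}-%{[fields.service]:other}-%{+yyyy.MM.dd} by hand
version="6.2.4"                      # whatever `filebeat version` reports
service="filebeat-nginx-accesslog"   # the fields.service value set above
printf 'index-%s-%s-%s\n' "$version" "$service" "$(date +%Y.%m.%d)"
# e.g. index-6.2.4-filebeat-nginx-accesslog-2019.01.26
```

Note that `:other` in the pattern is a default: when an event carries no fields.service, the index name falls back to "other".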

As before, let's first clear the indices ElasticSearch has generated:

curl -XDELETE http://localhost:9200/_all

Stop Logstash, then restart the filebeat service, and again visit nginx a few times to generate logs.

With curl we can see that the index has been generated (note: it takes a moment to appear; it is not created immediately):

[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   index-6.5.4-other-2019.01.25 wKM4QEWaRN63bJz-VjZrJQ   5   1          6            0     67.4kb         67.4kb

That completes the ELK and Filebeat setup. This article is only an introduction, covering ELK's basic features and environment setup. ELK has many more advanced capabilities; a thorough treatment would fill a book, so if you are interested, consult the documentation to learn more.

