26.1 Getting Started with ELK

In day-to-day operations work, handling system and business logs is especially important. If you only manage a handful of servers, you can get by without any tooling. But when there are many servers, and ops and development staff routinely need to read logs to track down problems, managing logs without any tooling is clearly unrealistic: it is tedious, and efficiency suffers.

Introducing ELK

Background:

1. The business keeps growing, and the number of servers keeps increasing;

2. The volume of access logs, application logs, and error logs grows with it;

3. To troubleshoot a problem, developers have to log in to each server and inspect the logs by hand, which is inconvenient;

4. When the operations team needs data, ops engineers have to log in to the servers and analyze the logs manually, which is also inconvenient.

Concepts:

ELK is an acronym for three products from Elastic: ElasticSearch, Logstash, and Kibana. The broader Elastic Stack consists of ElasticSearch, Logstash, Kibana, and Beats.

ElasticSearch is a search engine used to search, analyze, and store logs. It is distributed: it scales horizontally, supports automatic node discovery, and shards indices automatically.

Logstash collects logs, parses them into JSON, and hands them to ElasticSearch.

Kibana is a data-visualization component that presents the processed results through a web interface.

Beats is a family of lightweight log shippers.

X-Pack is a commercial extension pack that adds security, alerting, monitoring, reporting, and graph capabilities to the Elastic Stack.

Why use ELK:

For everyday log analysis, running grep or awk directly against a log file is usually enough to find what you need. At larger scale, though, when logs are voluminous and complex, this approach becomes inefficient and raises hard questions: how do you archive that much log data, what do you do when text search is too slow, and how do you query across multiple dimensions? What is needed is centralized log management, with the logs from every server collected in one place. The common solution is a centralized log collection system that gathers the logs from all nodes for unified management and access.
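The per-server approach described above amounts to scanning one file at a time; an illustrative Python equivalent of `grep` over a single log file (a sketch, fine for one server, unworkable across dozens):

```python
def grep_log(path, needle):
    """Naive single-file log search: the per-server approach that
    stops scaling once logs are spread across many machines."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if needle in line:
                hits.append(line.rstrip("\n"))
    return hits

# Example: find error lines in a local log file.
# for line in grep_log("/var/log/messages", "error"):
#     print(line)
```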

Large systems usually have a distributed architecture, with different service modules deployed on different servers. When a problem occurs, you mostly have to use the key information it exposes to locate the specific server and module involved; a centralized logging system makes that diagnosis much more efficient.

A complete centralized logging system needs the following key capabilities:

Collection - gather log data from many kinds of sources
Transport - move the log data to the central system reliably
Storage - store the log data
Analysis - support analysis through a UI
Alerting - provide error reporting and a monitoring mechanism

ELK delivers all of this as an integrated, fully open-source solution whose components fit together seamlessly and efficiently cover a wide range of use cases. It is currently a mainstream choice for log systems.

ELK architecture:

(ELK stack architecture diagram)

Above is an architecture diagram of the ELK stack; the data flow is clear from the figure:

Beats are single-purpose data shippers that send data from many machines to Logstash or ElasticSearch. Beats are not a mandatory part of the pipeline, so this article does not cover them further.

Logstash is a dynamic data-collection pipeline. It can ingest data over TCP, UDP, and HTTP (and can also accept data shipped by Beats), and can enrich the data or extract fields from it.

ElasticSearch is a JSON-based distributed search and analytics engine. As the core of ELK, it is where the data is centrally stored.

Kibana is ELK's user interface. It visualizes the collected data (reports, charts) and provides an interface for configuring and managing ELK.
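A toy Python sketch of the flow just described (not real Logstash code; the `parse_line` layout is an assumption for illustration): a raw log line is parsed into fields, then serialized as the kind of JSON document Logstash ships to ElasticSearch for Kibana to visualize.

```python
import json

def parse_line(line):
    # Assume a simple "<host> <program>: <message>" layout for this sketch.
    host, rest = line.split(" ", 1)
    program, message = rest.split(": ", 1)
    return {"host": host, "program": program, "message": message}

event = parse_line("lzx1 sshd: Accepted publickey for root")
doc = json.dumps(event)   # what would get indexed into es
print(doc)
```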

Preparing to install ELK

Official site: https://www.elastic.co/cn/ ; Chinese-language guide: https://elkguide.elasticsearch.cn/

Environment:

3 machines: lzx: 192.168.100.150, lzx1: 192.168.100.160, lzx2: 192.168.100.170

Role assignment:

All 3 machines run elasticSearch (es for short): lzx is the master node, and lzx1 and lzx2 are the two data nodes.

kibana goes on the es master node lzx; logstash goes on the es data node lzx1; beats goes on the other data node, lzx2.

All 3 machines need jdk8 (openjdk works too).
  • Edit hosts:

lzx

[root@lzx ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.150 lzx
192.168.100.160 lzx1
192.168.100.170 lzx2

lzx1

[root@lzx1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.150 lzx
192.168.100.160 lzx1
192.168.100.170 lzx2

lzx2

[root@lzx2 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.150 lzx
192.168.100.160 lzx1
192.168.100.170 lzx2
  • Install openjdk:

lzx

[root@lzx ~]# yum install -y java-1.8.0-openjdk
[root@lzx ~]# which java
/usr/bin/java

lzx1

[root@lzx1 ~]# yum install -y java-1.8.0-openjdk
[root@lzx1 ~]# which java
/usr/bin/java

lzx2

[root@lzx2 ~]# yum install -y java-1.8.0-openjdk
[root@lzx2 ~]# which java
/usr/bin/java

Installing es

Official documentation: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

  • Install es on all three machines:

lzx

[root@lzx ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch        //--import imports the GPG signing key
[root@lzx ~]# vim /etc/yum.repos.d/elastic.repo       //add the following content
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@lzx ~]# yum install -y elasticsearch

Alternatively, download the rpm package and install it directly:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm

lzx1

[root@lzx1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@lzx1 ~]# vim /etc/yum.repos.d/elastic.repo      //add the following content
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@lzx1 ~]# yum install -y elasticsearch

lzx2

[root@lzx2 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@lzx2 ~]# vim /etc/yum.repos.d/elastic.repo      //add the following content
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@lzx2 ~]# yum install -y elasticsearch

Configuring es

elasticsearch ships two configuration locations: the /etc/elasticsearch directory (which holds elasticsearch.yml) and /etc/sysconfig/elasticsearch. To set up the cluster we edit /etc/elasticsearch/elasticsearch.yml.

  • All three machines must be configured:

lzx

[root@lzx ~]# vim /etc/elasticsearch/elasticsearch.yml      //add the lines below; mind the space after each colon
cluster.name: lzxlinux      //add under the Cluster section; defines the cluster name
node.name: lzx        //add under the Node section; this node's hostname
node.master: true       //add under the Node section; whether this node is a master node
node.data: false        //add under the Node section; whether this node is a data node
network.host: 192.168.100.150       //add under the Network section; the IP to listen on
discovery.zen.ping.unicast.hosts: ["192.168.100.150","192.168.100.160","192.168.100.170"]      //add under the Discovery section; lists the cluster members, as IP addresses or hostnames
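The "mind the space" warning matters because elasticsearch.yml is YAML: `key: value` needs a space after the colon, and `cluster.name:lzxlinux` will not parse as intended. A hedged Python sketch of a checker for exactly that mistake:

```python
def check_yaml_spacing(text):
    """Flag lines like 'node.name:lzx' that lack the space YAML
    requires after the colon (a common elasticsearch.yml mistake)."""
    bad = []
    for n, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        key, sep, value = stripped.partition(":")
        if sep and value and not value.startswith(" "):
            bad.append((n, stripped))
    return bad

config = "cluster.name: lzxlinux\nnode.name:lzx\n"
print(check_yaml_spacing(config))   # [(2, 'node.name:lzx')]
```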

lzx1

[root@lzx1 ~]# vim /etc/elasticsearch/elasticsearch.yml      //add the lines below in the same places as above; mind the spaces
cluster.name: lzxlinux
node.name: lzx1
node.master: false       //not a master node
node.data: true        //a data node
network.host: 192.168.100.160
discovery.zen.ping.unicast.hosts: ["192.168.100.150","192.168.100.160","192.168.100.170"]

lzx2

[root@lzx2 ~]# vim /etc/elasticsearch/elasticsearch.yml      //add the lines below in the same places as above; mind the spaces
cluster.name: lzxlinux
node.name: lzx2
node.master: false       //not a master node
node.data: true        //a data node
network.host: 192.168.100.170
discovery.zen.ping.unicast.hosts: ["192.168.100.150","192.168.100.160","192.168.100.170"]
  • Start es:

lzx

[root@lzx ~]# systemctl start elasticsearch        //start elasticsearch
[root@lzx ~]# ps aux |grep elastic
elastic+   1305 23.2 70.5 3146348 704912 ?      Ssl  02:02   0:13 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.hTsSRA4X -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
[root@lzx ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      793/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      751/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      857/master
tcp6       0      0 192.168.100.150:9200    :::*                    LISTEN      1305/java
tcp6       0      0 192.168.100.150:9300    :::*                    LISTEN      1305/java      //ports 9200 and 9300 are now listening
tcp6       0      0 :::22                   :::*                    LISTEN      751/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      857/master
[root@lzx ~]# ls /var/log/elasticsearch/        //log files have been generated
gc.log.0.current     lzxlinux_deprecation.log             lzxlinux_index_search_slowlog.log
lzxlinux_access.log  lzxlinux_index_indexing_slowlog.log  lzxlinux.log

lzx1

[root@lzx1 ~]# systemctl start elasticsearch        //start elasticsearch
[root@lzx1 ~]# ps aux |grep elastic
elastic+   1306 20.2 72.8 3160640 728548 ?      Ssl  02:15   0:11 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.Jc7kukF8 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+   1354  0.0  0.3  63940  3236 ?        Sl   02:15   0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root       1360  0.0  0.0 112704   972 pts/0    R+   02:16   0:00 grep --color=auto elastic
[root@lzx1 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      804/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      758/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      860/master
tcp6       0      0 192.168.100.160:9200    :::*                    LISTEN      1306/java
tcp6       0      0 192.168.100.160:9300    :::*                    LISTEN      1306/java      //ports 9200 and 9300 are now listening
tcp6       0      0 :::22                   :::*                    LISTEN      758/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      860/master

lzx2

[root@lzx2 ~]# systemctl start elasticsearch       //start elasticsearch
[root@lzx2 ~]# ps aux |grep elastic
elastic+   1239 17.5 70.4 3149208 703828 ?      Ssl  02:15   0:08 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.59DsLiB9 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root       1288  0.0  0.0 112704   976 pts/0    S+   02:16   0:00 grep --color=auto elastic
[root@lzx2 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      740/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      835/master
tcp6       0      0 192.168.100.170:9200    :::*                    LISTEN      1239/java
tcp6       0      0 192.168.100.170:9300    :::*                    LISTEN      1239/java      //ports 9200 and 9300 are now listening
tcp6       0      0 :::22                   :::*                    LISTEN      740/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      835/master
  • Check es with curl:

Run on lzx

First stop the firewall and disable SELinux on all three machines.

[root@lzx ~]# curl '192.168.100.150:9200/_cluster/health?pretty'     //cluster health check
{
  "cluster_name" : "lzxlinux",
  "status" : "green",         //green means the cluster is healthy; yellow or red indicates a problem
  "timed_out" : false,
  "number_of_nodes" : 3,       //3 nodes
  "number_of_data_nodes" : 2,      //2 data nodes
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
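The same health check can be scripted. A minimal Python sketch that refuses to proceed unless the cluster is green (the sample response below is abridged from the output above; in practice you would fetch it over HTTP from port 9200):

```python
import json

# Abridged sample of the _cluster/health response shown above.
health_json = '''{
  "cluster_name": "lzxlinux",
  "status": "green",
  "number_of_nodes": 3,
  "number_of_data_nodes": 2
}'''

health = json.loads(health_json)
assert health["status"] == "green", f"cluster unhealthy: {health['status']}"
print(f"{health['cluster_name']}: {health['number_of_nodes']} nodes, "
      f"{health['number_of_data_nodes']} data nodes")
```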

You can also view detailed cluster state:

[root@lzx ~]# curl '192.168.100.150:9200/_cluster/state?pretty'        //view detailed cluster information

Installing kibana

As mentioned earlier, kibana is a data-visualization component that presents processed results through a web interface; it is installed on the master node.

  • Download and install:

Run on lzx

The yum repo was configured earlier, so it does not need to be configured again; if you skipped that step, configure it now.

[root@lzx ~]# yum install -y kibana
  • Edit the configuration file:
[root@lzx ~]# vim /etc/kibana/kibana.yml        //change the following settings
#server.port: 5601      change to     server.port: 5601         //uncomment the line
#server.host: "localhost"       change to      server.host: 192.168.100.150         //set to the master node's IP
#elasticsearch.url: "http://localhost:9200"       change to       elasticsearch.url: "http://192.168.100.150:9200"       //point the URL at the master node's IP and port
#logging.dest: stdout       change to       logging.dest: /var/log/kibana.log       //where kibana writes its log
  • Start the service:
[root@lzx ~]# touch /var/log/kibana.log ; chmod 777   /var/log/kibana.log
[root@lzx ~]# systemctl start kibana        //start the kibana service
[root@lzx ~]# ps aux |grep kibana
kibana     1062 18.9 18.2 1198292 182080 ?      Dsl  21:37   0:08 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
[root@lzx ~]# netstat -lntp |grep node         //kibana is no longer java; it is written in node.js
tcp        0      0 192.168.100.150:5601    0.0.0.0:*               LISTEN      1062/node         //port 5601 is now listening
  • Access the web interface:

In a browser, go to 192.168.100.150:5601. Because x-pack is not installed, there is no user authentication.

(Kibana web UI screenshot)

Installing logstash

Besides kibana we also need to install logstash; following the earlier role assignment, this is done on lzx1.

  • Download and install:

Run on lzx1

The yum repo was configured earlier, so it does not need to be configured again; if you skipped that step, configure it now.

[root@lzx1 ~]# yum install -y logstash
  • Edit the configuration file:
[root@lzx1 ~]# vim /etc/logstash/conf.d/syslog.conf          //add the following content
input {
  syslog {
    type => "system-syslog"       //defines the log type
    port => 10514         //defines the listening port
  }
}               //the input section defines the log source
output {
  stdout {
    codec => rubydebug         //print the output to the current terminal
  }
}              //the output section defines where the output goes
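To see what this input will receive, here is a hedged Python sketch that builds an RFC 3164-style syslog line like the ones rsyslog forwards to port 10514. The priority value encodes facility*8+severity, which is where the facility/severity fields in the rubydebug output further below come from:

```python
import time

def make_syslog_line(facility, severity, host, program, message):
    # RFC 3164 framing: "<PRI>TIMESTAMP HOST PROGRAM: MESSAGE"
    # where PRI = facility * 8 + severity.
    pri = facility * 8 + severity
    timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {host} {program}: {message}"

line = make_syslog_line(3, 6, "lzx1", "systemd", "Started Session 1 of user root.")
# To exercise the listener by hand, such a line could be sent over UDP:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(line.encode(), ("192.168.100.160", 10514))
print(line)
```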
  • Check the configuration file for errors:
[root@lzx1 ~]# cd /usr/share/logstash/bin/
[root@lzx1 bin]# ls
benchmark.sh         logstash               logstash.lib.sh      pqrepair
cpdump               logstash.bat           logstash-plugin      ruby
dependencies-report  logstash-keystore      logstash-plugin.bat  setup.bat
ingest-convert.sh    logstash-keystore.bat  pqcheck              system-install
[root@lzx1 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit      //check the configuration file for errors. --path.settings points at the directory holding logstash's settings; -f specifies the configuration file to check; --config.test_and_exit means validate the configuration and then exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-30T22:32:30,002][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-09-30T22:32:30,030][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-09-30T22:32:34,201][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK            //"Configuration OK" means the configuration file just written is fine
[2018-09-30T22:32:56,973][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Forward the system logs to port 10514:

[root@lzx1 bin]# vim /etc/rsyslog.conf         //edit the system log configuration; add one line below #### RULES ####
*.* @@192.168.100.160:10514           //*.* means all log types; @@ forwards over TCP (a single @ would use UDP); send all logs to port 10514 on 192.168.100.160
  • Start the service:
[root@lzx1 bin]# systemctl restart rsyslog         //restart rsyslog so the new configuration takes effect
[root@lzx1 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/syslog.conf       //start logstash
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-30T22:49:54,963][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-30T22:49:55,111][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"4923c0c7-3e8c-47d1-a484-e66a164e0d3d", :path=>"/var/lib/logstash/uuid"}
[2018-09-30T22:50:04,030][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.1"}
[2018-09-30T22:50:21,698][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-30T22:50:26,644][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x68440b08 run>"}
[2018-09-30T22:50:26,718][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[2018-09-30T22:50:26,752][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[2018-09-30T22:50:26,887][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-30T22:50:30,553][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-09-30T22:57:42,621][INFO ][logstash.inputs.syslog   ] new connection {:client=>"192.168.100.160:57566"}
{
    "facility_label" => "syslogd",
           "message" => "[origin software=\"rsyslogd\" swVersion=\"8.24.0\" x-pid=\"1296\" x-info=\"http://www.rsyslog.com\"] exiting on signal 15.\n",
    "severity_label" => "Informational",
           "program" => "rsyslogd",
         "timestamp" => "Sep 30 22:57:40",
          "@version" => "1",
          "facility" => 5,
        "@timestamp" => 2018-10-01T02:57:40.000Z,
          "priority" => 46,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 6
}
{
    "facility_label" => "system",
           "message" => "Stopping System Logging Service...\n",
    "severity_label" => "Informational",
           "program" => "systemd",
         "timestamp" => "Sep 30 22:57:40",
          "@version" => "1",
          "facility" => 3,
        "@timestamp" => 2018-10-01T02:57:40.000Z,
          "priority" => 30,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 6
}
{
    "facility_label" => "system",
           "message" => "Starting System Logging Service...\n",
    "severity_label" => "Informational",
           "program" => "systemd",
         "timestamp" => "Sep 30 22:57:41",
          "@version" => "1",
          "facility" => 3,
        "@timestamp" => 2018-10-01T02:57:41.000Z,
          "priority" => 30,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 6
}
{
    "facility_label" => "syslogd",
           "message" => "[origin software=\"rsyslogd\" swVersion=\"8.24.0\" x-pid=\"1329\" x-info=\"http://www.rsyslog.com\"] start\n",
    "severity_label" => "Informational",
           "program" => "rsyslogd",
         "timestamp" => "Sep 30 22:57:42",
          "@version" => "1",
          "facility" => 5,
        "@timestamp" => 2018-10-01T02:57:42.000Z,
          "priority" => 46,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 6
}
{
    "facility_label" => "security/authorization",
           "message" => "Unregistered Authentication Agent for unix-process:1321:674979 (system bus name :1.24, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)\n",
    "severity_label" => "Notice",
           "program" => "polkitd",
         "timestamp" => "Sep 30 22:57:42",
          "@version" => "1",
          "facility" => 10,
               "pid" => "498",
        "@timestamp" => 2018-10-01T02:57:42.000Z,
          "priority" => 85,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 5
}
{
    "facility_label" => "system",
           "message" => "Started System Logging Service.\n",
    "severity_label" => "Informational",
           "program" => "systemd",
         "timestamp" => "Sep 30 22:57:42",
          "@version" => "1",
          "facility" => 3,
        "@timestamp" => 2018-10-01T02:57:42.000Z,
          "priority" => 30,
              "host" => "192.168.100.160",
         "logsource" => "lzx1",
              "type" => "system-syslog",
          "severity" => 6
}         //these are the collected system logs

Open a second ssh session (e.g. duplicate the session in Xshell) and check whether the port is listening:

[root@lzx1 ~]# netstat  -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.100.160:27019   0.0.0.0:*               LISTEN      884/mongod
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      884/mongod
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      797/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      756/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      871/master
tcp6       0      0 192.168.100.160:9200    :::*                    LISTEN      754/java
tcp6       0      0 :::10514                :::*                    LISTEN      1243/java          //port 10514 is now listening
tcp6       0      0 192.168.100.160:9300    :::*                    LISTEN      754/java
tcp6       0      0 :::22                   :::*                    LISTEN      756/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      871/master
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      1243/java

Configuring logstash

So far the system logs are only printed to the terminal; they are not yet going into es, so logstash needs further configuration.

  • Edit the configuration file:
[root@lzx1 bin]# vim /etc/logstash/conf.d/syslog.conf        //change the content to the following
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["192.168.100.150:9200"]         //point at port 9200 on the master node; the IP and port of lzx1 or lzx2 would work too, since es is distributed
    index => "system-syslog-%{+YYYY.MM}"         //defines the index name
  }
}
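The %{+YYYY.MM} in the index name is a Logstash date pattern: each event is routed to a monthly index based on its @timestamp. A quick Python sketch of the equivalent naming scheme:

```python
from datetime import datetime, timezone

def index_for(ts):
    # Equivalent of "system-syslog-%{+YYYY.MM}": one index per month,
    # derived from the event's timestamp (Logstash works in UTC).
    return ts.strftime("system-syslog-%Y.%m")

print(index_for(datetime(2018, 10, 1, tzinfo=timezone.utc)))  # system-syslog-2018.10
```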
  • Check the configuration file:
[root@lzx1 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-30T23:23:53,156][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK         //the configuration is fine
[2018-09-30T23:23:56,897][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
  • Start the service in the background:
[root@lzx1 bin]# chown logstash /var/log/logstash/logstash-plain.log       //so logstash can write its log after starting; without changing the owner it cannot write the log
[root@lzx1 bin]# chown -R logstash:logstash /var/lib/logstash/       //this step matters: the earlier test runs started logstash as root, so the files it generated are owned by root and must be changed back, otherwise the service below fails to start properly
[root@lzx1 bin]# systemctl start logstash
[root@lzx1 bin]# tail /var/log/logstash/logstash-plain.log       //check the log; only when the messages below appear has it succeeded (this took me several tries)
[2018-09-30T23:59:15,135][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-09-30T23:59:15,935][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x448b8162 run>"}
[2018-09-30T23:59:16,035][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[2018-09-30T23:59:16,053][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[2018-09-30T23:59:16,079][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-30T23:59:16,818][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-09-30T23:59:23,411][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x448b8162 run>"}
[root@lzx1 bin]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.100.160:27019   0.0.0.0:*               LISTEN      884/mongod
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      884/mongod
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      797/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      756/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      871/master
tcp6       0      0 192.168.100.160:9200    :::*                    LISTEN      754/java
tcp6       0      0 :::10514                :::*                    LISTEN      2858/java
tcp6       0      0 192.168.100.160:9300    :::*                    LISTEN      754/java
tcp6       0      0 :::22                   :::*                    LISTEN      756/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      871/master
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      2858/java         //port 9600 is now listening

But it is bound to 127.0.0.1:9600, so remote machines cannot reach it.

  • Edit the logstash configuration file:
[root@lzx1 bin]# vim /etc/logstash/logstash.yml
http.host: "192.168.100.160"         //add this line below # http.host: "127.0.0.1"
  • Restart the service:
[root@lzx1 bin]# systemctl restart logstash
[root@lzx1 bin]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.100.160:27019   0.0.0.0:*               LISTEN      885/mongod
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      885/mongod
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      794/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      750/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      867/master
tcp6       0      0 192.168.100.160:9200    :::*                    LISTEN      755/java
tcp6       0      0 :::10514                :::*                    LISTEN      1161/java
tcp6       0      0 192.168.100.160:9300    :::*                    LISTEN      755/java
tcp6       0      0 :::22                   :::*                    LISTEN      750/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      867/master
tcp6       0      0 192.168.100.160:9600    :::*                    LISTEN      1161/java       //now listening on 192.168.100.160:9600
  • On lzx, check whether the index exists:
The index name was defined in the logstash configuration file earlier.

[root@lzx ~]# curl '192.168.100.150:9200/_cat/indices?v'       //list index information
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-syslog-2018.10 NUEz10LGT1uvVNhn6yUw3g   5   1         32            0    315.6kb        153.6kb         //the presence of this index means logstash and es are communicating properly
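The ?v header row makes the _cat output easy to parse by whitespace. A small Python sketch (using an abridged copy of the output above) that turns it into dicts:

```python
# Abridged sample of the _cat/indices?v output (whitespace-separated columns).
cat_output = """\
health status index                 uuid                   pri rep docs.count
green  open   system-syslog-2018.10 NUEz10LGT1uvVNhn6yUw3g   5   1         32
"""

lines = cat_output.strip().splitlines()
headers = lines[0].split()
rows = [dict(zip(headers, line.split())) for line in lines[1:]]

for row in rows:
    # Confirm the index logstash created is present and healthy.
    print(row["index"], row["health"], row["docs.count"])
```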
  • Get detailed information about a specific index:
[root@lzx ~]# curl '192.168.100.150:9200/system-syslog-2018.10?pretty'     //detailed information about the named index
{
  "system-syslog-2018.10" : {
    "aliases" : { },
    "mappings" : {
      "doc" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "@version" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "facility" : {
            "type" : "long"
          },
          "facility_label" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "host" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "logsource" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "message" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "pid" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "priority" : {
            "type" : "long"
          },
          "program" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "severity" : {
            "type" : "long"
          },
          "severity_label" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "timestamp" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "type" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1538370063177",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "NUEz10LGT1uvVNhn6yUw3g",
        "version" : {
          "created" : "6040199"
        },
        "provided_name" : "system-syslog-2018.10"
      }
    }
  }
}

Viewing logs in kibana

  • Configure the index pattern in the kibana web interface:

Go to Management -> Kibana -> Index Patterns and enter system-syslog-2018.10.

(Kibana: create index pattern, step 1)
Click Next step and choose the time filter field.

(Kibana: create index pattern, step 2)

Click Create index pattern.

(Kibana: index pattern field list)

Click Discover to see the logs from lzx1.

(Kibana: Discover view)

Checking on the command line, the timestamps match:

[root@lzx1 bin]# tail -f /var/log/messages
Oct  1 22:25:43 lzx1 logstash: [2018-10-01T22:25:43,874][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
Oct  1 22:25:43 lzx1 logstash: [2018-10-01T22:25:43,886][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Oct  1 22:25:44 lzx1 logstash: [2018-10-01T22:25:44,543][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Oct  1 22:33:55 lzx1 chronyd[512]: Source 106.39.20.237 replaced with 193.228.143.14
Oct  1 22:33:55 lzx1 rsyslogd: action 'action 0' resumed (module 'builtin:omfwd') [v8.24.0 try http://www.rsyslog.com/e/2359 ]
Oct  1 22:33:55 lzx1 rsyslogd: action 'action 0' resumed (module 'builtin:omfwd') [v8.24.0 try http://www.rsyslog.com/e/2359 ]
Oct  1 22:33:55 lzx1 logstash: [2018-10-01T22:33:55,836][INFO ][logstash.inputs.syslog   ] new connection {:client=>"192.168.100.160:59018"}
Oct  1 23:01:05 lzx1 systemd: Started Session 3 of user root.
Oct  1 23:01:05 lzx1 systemd: Starting Session 3 of user root.
Oct  1 23:04:16 lzx1 chronyd[512]: Source 193.228.143.14 replaced with 193.228.143.13
Oct  1 23:11:58 lzx1 systemd: Started Session 4 of user root.
Oct  1 23:11:58 lzx1 systemd: Starting Session 4 of user root.
Oct  1 23:11:58 lzx1 systemd-logind: New session 4 of user root.
Oct  1 23:12:04 lzx1 systemd-logind: Removed session 4.

As a further test, log in to lzx1 from lzx:

[root@lzx ~]# ssh 192.168.100.160
Enter passphrase for key '/root/.ssh/id_rsa':
Last login: Mon Oct  1 23:11:58 2018 from 192.168.100.170
[root@lzx1 ~]# logout
Connection to 192.168.100.160 closed.

Check the log on lzx1:

Oct  1 23:23:58 lzx1 systemd: Started Session 5 of user root.
Oct  1 23:23:58 lzx1 systemd-logind: New session 5 of user root.
Oct  1 23:23:58 lzx1 systemd: Starting Session 5 of user root.
Oct  1 23:24:03 lzx1 kernel: sched: RT throttling activated
Oct  1 23:25:56 lzx1 systemd-logind: Removed session 5.

Refresh the browser page:

The new log entries now show up in the browser.

Collecting nginx logs

With system log collection in place, the next step is to configure collection of nginx logs.

  • Edit the configuration file:
[root@lzx1 bin]# vim /etc/logstash/conf.d/nginx.conf          // add the following content
input {
  file {
    path => "/tmp/elk_access.log"
    start_position => "beginning"        // start collecting from the beginning of the file
    type => "nginx"
  }           // a single file is specified directly; its contents become the logstash input
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
    }         // parsing pattern; the nginx log_format must be defined to match it
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["192.168.100.160:9200"]
        index => "nginx-test-%{+YYYY.MM.dd}"
  }
}       // output section
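In the output block, `index => "nginx-test-%{+YYYY.MM.dd}"` tells Logstash to write to a new index per day, named from each event's timestamp. As a rough illustration only (Logstash does this expansion internally), the same naming can be reproduced in shell:

```shell
# Illustration: build today's index name the way logstash expands
# "nginx-test-%{+YYYY.MM.dd}" (logstash actually uses each event's @timestamp)
idx="nginx-test-$(date +%Y.%m.%d)"
echo "$idx"        # e.g. nginx-test-2018.10.02
```

One index per day keeps old data easy to expire: deleting a whole index is much cheaper than deleting individual documents.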
  • Check the configuration file:
[root@lzx1 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-10-01T23:45:01,238][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK         // the configuration is fine
[2018-10-01T23:45:11,827][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
  • Edit the nginx virtual host configuration file:
If nginx is not installed, installing it with yum is fine; here nginx was previously installed from source.

[root@lzx1 bin]# cd /usr/local/nginx/conf/vhost/
[root@lzx1 vhost]# vim elk.conf          // add the following content
server {
            listen 80;
            server_name elk.lzx.com;

            location / {
                proxy_pass      http://192.168.100.150:5601;         // proxy target: kibana on the master node
                proxy_set_header Host   $host;
                proxy_set_header X-Real-IP      $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            access_log  /tmp/elk_access.log main2;       // access log at the path logstash reads, using the main2 format
        }
  • Edit the nginx configuration file:
[root@lzx1 vhost]# vim /usr/local/nginx/conf/nginx.conf      // add the following
    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$upstream_addr" $request_time';          // add above the sendfile on; line

#       include vhost/*.conf;        // remove the leading #

Check the configuration:

[root@lzx1 vhost]# /usr/local/nginx/sbin/nginx -t       // check the configuration file
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@lzx1 vhost]# /usr/local/nginx/sbin/nginx -s reload         // reload the configuration
  • Access from a browser:

Before visiting, add a line to the C:\Windows\System32\drivers\etc\hosts file on Windows:

192.168.100.160  elk.lzx.com

After saving, entering elk.lzx.com in the browser brings up the Kibana interface.


  • Check the access log:

If the site can be accessed normally, log entries are being generated; take a look:

[root@lzx1 vhost]# ls /tmp/elk_access.log
/tmp/elk_access.log
[root@lzx1 vhost]# wc -l !$
wc -l /tmp/elk_access.log
157 /tmp/elk_access.log
[root@lzx1 vhost]# cat !$
cat /tmp/elk_access.log

elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:05:09 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.113
elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:05:15 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.062
elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:09:16 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.033
elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:09:16 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.039
elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:09:17 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.042
elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:09:17 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6756.400 QQBrowser/10.3.2473.400" "192.168.100.150:5601" 0.033           // excerpt; these access log entries are what we need
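Before the data ever reaches Kibana, lines in the main2 format can still be picked apart with the grep/awk approach mentioned at the beginning. A minimal sketch, using one of the lines above (field numbers assume main2's space-separated layout; the user agent is shortened here):

```shell
# One access-log line in the main2 format defined above (user agent shortened)
line='elk.lzx.com 192.168.100.1 - - [02/Oct/2018:22:05:09 -0400] "GET /favicon.ico HTTP/1.1" 404 80 "-" "Mozilla/5.0" "192.168.100.150:5601" 0.113'
# Field 1 is $http_host, field 2 is $remote_addr, field 10 is $status
host=$(echo "$line"   | awk '{print $1}')
client=$(echo "$line" | awk '{print $2}')
status=$(echo "$line" | awk '{print $10}')
echo "$host $client $status"        # elk.lzx.com 192.168.100.1 404
```

This works for a quick check, but it is exactly the manual approach that breaks down at scale, which is why the grok filter does the parsing centrally.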
  • On lzx, check whether an index has been created:
[root@lzx ~]# curl '192.168.100.150:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-test-2018.10.02 -BRyeAQmRueqkQmSRQC4bg   5   1       7553            0        2mb       1014.2kb
green  open   nginx-test-2018.10.03 Fbh6ooCgQSqqwuVJKrw5FQ   5   1      17026            0      4.4mb          2.2mb          // logs for two separate days
green  open   .kibana               ngh8OGEuRUS5EJ9R53Ycww   1   1          2            0     21.6kb         10.8kb
green  open   system-syslog-2018.10 NUEz10LGT1uvVNhn6yUw3g   5   1      24811            0      6.3mb          3.2mb

If no index appears, restart the logstash service.

  • Configure in the Kibana UI:

The steps are the same as for the system logs: in the left menu, click Management–>Index Patterns–>Create Index Pattern


Enter nginx-test-* (without a specific date), click Next step–>Create index pattern, then click Discover, select nginx-test-*, and view the logs


Collecting logs with Beats

As mentioned earlier, Beats is a family of lightweight log shippers, whereas Logstash is comparatively heavy on resources.

Learn more: https://www.elastic.co/cn/products/beats . The members are: Filebeat (log files), Metricbeat (metrics), Packetbeat (network data), Winlogbeat (Windows event logs), Auditbeat (audit data), and Heartbeat (uptime monitoring). The family is also extensible, supporting custom-built Beats.

  • Download and install:

Run on lzx2:

[root@lzx2 ~]# yum install -y filebeat
  • Edit the configuration file:
[root@lzx2 ~]# vim /etc/filebeat/filebeat.yml       // make the changes below
enabled: false       change to      #  enabled: false
  paths:                                    paths:
    - /var/log/*.log        change to              - /var/log/messages            // Filebeat inputs section

output.elasticsearch:       change to      #output.elasticsearch:
hosts: ["localhost:9200"]       change to      #hosts: ["localhost:9200"]        // Elasticsearch output section

output.console:
  enable: true        // add these two lines (mind the indentation); Outputs section
  • Start in the foreground to watch the output:
[root@lzx2 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml           // start in the foreground to watch the logs

Meanwhile, SSH from lzx1 into lzx2:

[root@lzx1 vhost]# ssh lzx2
The authenticity of host 'lzx2 (192.168.100.170)' can't be established.
ECDSA key fingerprint is SHA256:teKu3atU+OByPeXXD2xXhyb30vg6nW8ETqqCr785Dbc.
ECDSA key fingerprint is MD5:13:a4:f1:c0:1f:62:65:d4:f4:4e:42:ab:40:f1:36:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lzx2,192.168.100.170' (ECDSA) to the list of known hosts.
root@lzx2's password:
Last login: Tue Oct  2 21:57:43 2018 from 192.168.100.1
[root@lzx2 ~]# logout
Connection to lzx2 closed.

Then look again at what lzx2 prints:

{"@timestamp":"2018-10-03T04:53:20.529Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"input":{"type":"log"},"host":{"name":"lzx2"},"beat":{"name":"lzx2","hostname":"lzx2","version":"6.4.2"},"source":"/var/log/messages","offset":128846,"message":"Oct  3 00:37:22 lzx2 chronyd[509]: Selected source 106.187.100.179","prospector":{"type":"log"}}
{"@timestamp":"2018-10-03T04:53:20.529Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"input":{"type":"log"},"beat":{"name":"lzx2","hostname":"lzx2","version":"6.4.2"},"host":{"name":"lzx2"},"source":"/var/log/messages","offset":128913,"message":"Oct  3 00:37:24 lzx2 chronyd[509]: Source 5.79.108.34 replaced with 85.199.214.100","prospector":{"type":"log"}}
{"@timestamp":"2018-10-03T04:53:35.531Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"prospector":{"type":"log"},"input":{"type":"log"},"beat":{"name":"lzx2","hostname":"lzx2","version":"6.4.2"},"host":{"name":"lzx2"},"source":"/var/log/messages","offset":128996,"message":"Oct  3 00:53:28 lzx2 systemd-logind: New session 5 of user root."}
{"@timestamp":"2018-10-03T04:53:35.531Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"beat":{"name":"lzx2","hostname":"lzx2","version":"6.4.2"},"host":{"name":"lzx2"},"offset":129061,"message":"Oct  3 00:53:29 lzx2 systemd: Started Session 5 of user root.","source":"/var/log/messages","prospector":{"type":"log"},"input":{"type":"log"}}
{"@timestamp":"2018-10-03T04:53:35.531Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"source":"/var/log/messages","offset":129123,"message":"Oct  3 00:53:29 lzx2 systemd: Starting Session 5 of user root.","prospector":{"type":"log"},"input":{"type":"log"},"beat":{"hostname":"lzx2","version":"6.4.2","name":"lzx2"},"host":{"name":"lzx2"}}
{"@timestamp":"2018-10-03T04:53:50.533Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.2"},"host":{"name":"lzx2"},"beat":{"name":"lzx2","hostname":"lzx2","version":"6.4.2"},"source":"/var/log/messages","offset":129186,"message":"Oct  3 00:53:48 lzx2 systemd-logind: Removed session 5.","prospector":{"type":"log"},"input":{"type":"log"}}

The quick test above shows the logs display fine in the foreground.

  • Edit the configuration file:
[root@lzx2 ~]# ls /var/log/elasticsearch/lzxlinux.log       // this is the elasticsearch log file
/var/log/elasticsearch/lzxlinux.log
[root@lzx2 ~]# vim /etc/filebeat/filebeat.yml        // make the changes below
  paths:                                      paths:
    - /var/log/messages          change to           - /var/log/elasticsearch/lzxlinux.log        // Filebeat inputs section

output.console:
  enable: true         // delete these two lines; Outputs section

#output.elasticsearch:       change to      output.elasticsearch:
#hosts: ["localhost:9200"]       change to      hosts: ["192.168.100.150:9200"]        // Elasticsearch output section
  • Start the service:
[root@lzx2 ~]# systemctl start filebeat
[root@lzx2 ~]# ps aux |grep filebeat
root      10206  0.2  1.0 377268 19124 ?        Ssl  01:28   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
  • On lzx, check whether a new index has been created:
[root@lzx ~]# !curl
curl '192.168.100.150:9200/_cat/indices?v'
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nginx-test-2018.10.02     -BRyeAQmRueqkQmSRQC4bg   5   1       7553            0        1mb            1mb
yellow open   nginx-test-2018.10.03     Fbh6ooCgQSqqwuVJKrw5FQ   5   1      52791            0      6.6mb          6.6mb
yellow open   filebeat-6.4.2-2018.10.03 dZIeTzJ8QBO6UCkmWchBZw   3   1        616            0        205.8kb        205.8kb       // the new index has been created (yellow just means some replica shards are unassigned)
green  open   .kibana                   ngh8OGEuRUS5EJ9R53Ycww   1   0          3            0     17.4kb         17.4kb
yellow open   system-syslog-2018.10     NUEz10LGT1uvVNhn6yUw3g   5   1      59232            0      7.4mb          7.4mb
  • Configure in the Kibana UI:

Follow the same steps as for nginx: create the index pattern, then view the logs under Discover


This is the content of the elasticsearch log file.


