All of the environments below are CentOS 7.2 or later.
ELK releases move quickly: the version jumped straight from 2.x to 5.x, and a lot changed along the way. For example, the head visualization plugin (mobz/elasticsearch-head) can no longer be installed inside es itself; elasticsearch-head now has to be installed separately.
There are now many log collection tools. Besides logstash there are filebeat, metricbeat (metrics), packetbeat (network data), winlogbeat (Windows event logs), and heartbeat (uptime monitoring).
Installation and configuration
A Java environment is required. You can download the rpm package and install it directly; the installation is very simple. See: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Official ELK installation documentation: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
JVM heap size configuration:
The heap size is set in the jvm.options configuration file:
[root@node3 elasticsearch]# egrep "\-Xms|\-Xmx" jvm.options
## -Xms4g
## -Xmx4g
-Xms256m
-Xmx256m
[root@node3 elasticsearch]# pwd
/etc/elasticsearch
[root@node3 elasticsearch]# ls
elasticsearch.yml  jvm.options  log4j2.properties  scripts
[root@node3 elasticsearch]#
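After changing -Xms/-Xmx, restart es and confirm the heap that was actually applied. A minimal check, assuming es answers on localhost:9200 (adjust the host for your setup):

```bash
# restart the service so the new jvm.options take effect
systemctl restart elasticsearch
# heap_max_in_bytes reported by the node should match the -Xmx that was configured
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max
```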
Installing the es head visualization plugin
It can be started with docker; then just point it at the es address in the web UI to browse the cluster.
docker run -d --name mob -p 9100:9100 mobz/elasticsearch-head:5
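One gotcha not mentioned above: the head UI talks to es from the browser, so es 5.x usually needs CORS enabled before head can connect. A minimal sketch, assuming the rpm install layout from the previous section:

```bash
# allow the head UI (served from another origin) to call the es HTTP API
cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
systemctl restart elasticsearch
```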
Notes on collecting logs with logstash
- Since logstash runs as the logstash user, if you collect logs with a file input, make sure that user has read permission on the log files.
To run the service as a different user instead, edit the systemd unit file (see the unit below and the follow-up commands after it):
[root@node3 bin]# cat /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
[root@node3 bin]#
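Two follow-ups worth noting. After editing User=/Group= the unit has to be reloaded; alternatively the service user can be left alone and simply granted read access to the logs. A sketch (the setfacl line and the log path are just an illustration, not from the original):

```bash
# reload systemd after editing the unit file, then restart the service
systemctl daemon-reload
systemctl restart logstash

# alternative: keep User=logstash and grant it read access to the log file instead
setfacl -m u:logstash:r /var/log/messages
```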
How to make es 5.x start as root
The environment is CentOS 7.2. The startup user can be changed in the systemd unit file:
[root@node3 bin]# egrep -i "user|group" /usr/lib/systemd/system/elasticsearch.service
User=elasticsearch
Group=elasticsearch
# Send the signal only to the JVM rather than its control group
WantedBy=multi-user.target
[root@node3 bin]#
That said, running es as the root user is not recommended (es 5.x will in fact refuse to start as root by default).
Installing filebeat
- filebeat does not need a Java environment,
- just download the rpm package and install it directly (download link); see the sketch after this list,
- filebeat is a single binary with no dependencies, so it uses very few resources.
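A minimal install sketch; the version and rpm URL below are assumptions, so pick the release that matches your stack from the Elastic downloads page:

```bash
# download and install the filebeat rpm (5.5.2 is assumed to match the es/kibana images used later)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.2-x86_64.rpm
rpm -vi filebeat-5.5.2-x86_64.rpm
systemctl enable filebeat
```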
Sending logs from filebeat directly to es
A sample configuration:
[root@docker filebeat]# grep -vE "#|^$" filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
  fields:
    log_source: system-211
- input_type: log
  paths:
    - /var/log/supervisor/withdraw_nsqsub.log
  fields:
    log_source: 211-withdraw_nsqsub
- input_type: log
  paths:
    - /var/log/supervisor/gateway_web.log
  fields:
    log_source: 211-gateway
- input_type: log
  paths:
    - /var/log/nginx/*.log
  fields:
    log_source: 211-nginx
output.elasticsearch:
  hosts: ["192.168.7.232:9200"]
[root@docker filebeat]#
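A quick way to confirm events are arriving. Since no index is set in output.elasticsearch above, filebeat writes to its default filebeat-* indices:

```bash
# list the indices on the es node from the config above; filebeat-YYYY.MM.dd indices should appear
curl -s 'http://192.168.7.232:9200/_cat/indices?v' | grep filebeat
```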
Sending logs from filebeat to logstash
See the official example: https://www.elastic.co/guide/en/logstash/5.6/logstash-config-for-filebeat-modules.html
- First, logstash needs the logstash-input-beats plugin installed:
./logstash-plugin install logstash-input-beats
- For the logstash machine to listen on port 5044, add a configuration file under conf.d, for example:
[root@node3 conf.d]# cat file.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.7.232:9200"]
        index => "system-232-%{+YYYY.MM.dd}"
    }
}
input {
    file {
        path => "/var/log/nginx/access_www.test.com.log"
        codec => "json"
        type => "nginx-access"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.7.232:9200"]
        index => "nginx-access-232-%{+YYYY.MM.dd}"
    }
}
input {
    beats {
        port => 5044
        host => "0.0.0.0"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.7.232:9200"]
        index => "beats-%{+YYYY.MM.dd}"
    }
}
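It is worth testing the configuration before restarting the service. A minimal sketch using the paths from the unit file shown earlier:

```bash
# --config.test_and_exit (-t) only checks the configuration, it does not start the pipeline
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
systemctl restart logstash
```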
Multiple files can be placed under the conf.d directory. Note that logstash 5.x merges them all into a single pipeline, so the conditionals on [type] in the files below are what keep each kind of event going to its own index:
[chenlc@mail conf.d]$ vim filebeat.conf
input {
    beats {
        port => 5044
        host => "192.168.8.99"
    }
}
output {
    #-----node2-------------------------
    if [type] == "gateway_zhifu-node2" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "gateway_zhiu-node2-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "notify_nsqsub-node2" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "notify_nsqsub-node2-%{+YYYY.MM}"
        }
    }
}

# the second file
[chenlc@mail conf.d]$ cat jinpay.conf
input {
    beats {
        port => 5044
        host => "39.109.11.99"
    }
}
output {
    # jinpay node1
    if [type] == "gateway_jinpay-node1" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "gateway_jinpay-node1-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "notify_nsqsub-jingpay-node1" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "notify_nsqsub-jingpay-node1-%{+YYYY.MM}"
        }
    }
}
- Once logstash has a beats input configured, it opens the configured port at startup (the default is 5044), and filebeat then connects to logstash on that port; a quick check that the listener is up is shown below.
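A minimal check on the logstash host (ss ships with iproute2 on CentOS 7):

```bash
# the beats listener should show up once the logstash pipeline has started
ss -lntp | grep 5044
```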
Making filebeat read a file from the beginning again
If you want filebeat to re-read a file from the beginning, deleting the es index does nothing; you have to remove the corresponding entries from filebeat's registry. An exchange on this from the elasticsearch.cn forum:
Check whether there is a registry file under your filebeat data path, for example /usr/local/bin/data/registry. That file records the offsets of the files filebeat has read; try deleting it.
lucky_girl • 2017-05-08 16:03: Thanks a lot for the idea. At first I deleted or emptied the file, but it kept being restored automatically. Later I removed only the entries for the log files I wanted to re-ingest, and after that they were read from the beginning again.
tujunlan • 2017-06-06 16:26: How exactly did you change it? I tried deleting the file, deleting the entries for the logs I need to re-read, and changing the offsets, and none of it worked.
tujunlan • 2017-06-06 16:37: Found the reason: you have to stop filebeat first, and only then delete the entries.
Reference: https://elasticsearch.cn/question/1526
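Putting that thread together, a minimal reset procedure looks like this; the registry path below is the default for an rpm install (path.data = /var/lib/filebeat) and is an assumption, so check your own filebeat.yml:

```bash
systemctl stop filebeat            # stop filebeat first, otherwise it rewrites the registry
rm /var/lib/filebeat/registry      # or edit it and delete only the entries for the files to re-read
systemctl start filebeat
```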
Starting kibana with docker
The command:
docker run --name kibana -e ELASTICSEARCH_URL=http://192.168.7.232:9200 -p 5601:5601 -d kibana:5.5.2
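A quick sanity check that the container came up (kibana can take a little while before it answers):

```bash
docker logs kibana | tail -n 20
# an HTTP status code (200 or a redirect) means the UI is answering
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5601
```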
es index-related configuration
filebeat can set the index name
When there are several machines, each machine should get at least its own index. Just add the index key under output. A sample configuration:
[root@node1 tools]# egrep -v "#|^$" /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*.log
  fields:
    log_source: 230-nginx-log
output.elasticsearch:
  hosts: ["192.168.7.232:9200"]
  index: "230-filebeat"
[root@node1 tools]#
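To confirm the custom index is being written, list it on the es node; the wildcard covers the case where a date suffix gets appended to the name:

```bash
curl -s 'http://192.168.7.232:9200/_cat/indices/230-filebeat*?v'
```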
ELK + filebeat: a concrete setup
- OS: CentOS 7.2
- es: a cluster made of two docker containers
- logstash: receives the data sent by filebeat and creates a separate index for each data type. logstash is fairly heavy on system resources, so it should not be installed on every machine; for plain file logs, filebeat is enough.
- filebeat: collects the logs
elasticsearch is installed with docker
Startup scripts for the two nodes:
docker run -d \
  --name es1 \
  -p 192.168.8.99:9200:9200 \
  -p 192.168.8.99:9300:9300 \
  --network clc \
  --ip 172.18.0.150 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /data/docker_work/elasticsearch1/config:/usr/share/elasticsearch/config \
  -v /data/docker_work/elasticsearch1/logs:/usr/share/elasticsearch/logs \
  -v /data/docker_work/elasticsearch1/data:/usr/share/elasticsearch/data \
  -v /data/docker_work/elasticsearch1/plugins:/usr/share/elasticsearch/plugins \
  elasticsearch:5.5.2

docker run -d \
  --name es2 \
  -p 192.168.8.99:9201:9200 \
  -p 192.168.8.99:9301:9300 \
  --network clc \
  --ip 172.18.0.151 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /data/docker_work/elasticsearch2/config:/usr/share/elasticsearch/config \
  -v /data/docker_work/elasticsearch2/logs:/usr/share/elasticsearch/logs \
  -v /data/docker_work/elasticsearch2/data:/usr/share/elasticsearch/data \
  -v /data/docker_work/elasticsearch2/plugins:/usr/share/elasticsearch/plugins \
  elasticsearch:5.5.2
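Not covered by the original scripts, but a common gotcha when running es 5.x in docker: with network.host bound to a non-loopback address, es runs its production bootstrap checks and needs a higher vm.max_map_count on the docker host:

```bash
# raise the limit now and persist it across reboots
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```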
es1's configuration file:
[c@node elasticsearch1]$ grep -Ev "#|^$" config/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.0.150", "172.18.0.151"]
es2's configuration file:
[c@node docker_work]$ grep -Ev "#|^$" elasticsearch2/config/elasticsearch.yml
cluster.name: elk
node.name: node-2
path.logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.0.150", "172.18.0.151"]
[chenlc@zhifu-node3 docker_work]$
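Once both containers are up, the two nodes should have formed one cluster; a quick check from the docker host:

```bash
# number_of_nodes should be 2 and the status green (or yellow while replicas settle)
curl -s 'http://192.168.8.99:9200/_cluster/health?pretty'
curl -s 'http://192.168.8.99:9200/_cat/nodes?v'
```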
What the mounted directories contain:
[c@node elasticsearch1]$ tree -L 2
.
├── config
│   ├── elasticsearch.yml    # main configuration file
│   ├── jvm.options          # JVM settings such as the -Xmx and -Xms sizes
│   ├── log4j2.properties
│   └── scripts
├── data
│   └── nodes
├── logs
├── plugins
└── start.sh
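One point to watch with these bind mounts: es inside the container runs as a non-root user and must be able to write data/ and logs/. The uid below is an assumption for this image, so check it first:

```bash
# find out which uid the in-container elasticsearch user has, then make the host directories match
docker exec es1 id elasticsearch
chown -R 1000:1000 /data/docker_work/elasticsearch1 /data/docker_work/elasticsearch2   # 1000 is an assumption
```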
Installing the head plugin for browsing es
With es 2.x the head plugin could be installed directly into es; from es 5.x onward it cannot, so it has to run as a separate service. Here it is started with docker; the startup script:
docker run -d \
  --name mobz \
  -p 192.168.8.99:9100:9100 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /data/docker_work/es_mobz/config_url/app.js:/usr/src/app/_site/app.js \
  mobz/elasticsearch-head:5
The mounted app.js only carries an option for the default es address; it is not really needed, because the address can also be changed in the web UI.
Installing kibana
kibana is also run with docker; the script:
docker run -d \
  --name kibana \
  --network clc \
  --ip 172.18.0.152 \
  -e ELASTICSEARCH_URL=http://172.18.0.150:9200 \
  -v /etc/localtime:/etc/localtime:ro \
  -p 192.168.8.99:5601:5601 \
  kibana:5.5.2
Installing and configuring logstash
Install the Java environment first. Java 9 is not supported yet; the official note (2017/10/16):
Logstash requires Java 8. Java 9 is not supported. Use the official Oracle distribution or an open-source distribution such as OpenJDK.
Just download the Java binary tarball. The Java environment variables:
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar
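Assuming those exports were added to /etc/profile (the original does not say where they go), a quick verification:

```bash
source /etc/profile
java -version   # should report a 1.8.x JDK
```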
Install logstash following the official documentation:
# Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo:
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# And your repository is ready for use. You can install it with:
sudo yum install logstash
If there is a problem at startup, just check the logs. One common cause is that the service cannot find the java binary on its PATH; a symlink to it fixes that, for example as below.
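A sketch of that symlink, using the JAVA_HOME from the environment section above; the link location is an assumption, any directory on the service's PATH works:

```bash
ln -s /usr/local/jdk/bin/java /usr/local/bin/java   # or link into /usr/bin if the service PATH is stricter
```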
The logstash configuration files:
[c@node logstash]$ grep -vE "#|^$" logstash.yml
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
log.level: warn
path.logs: /var/log/logstash

[c@node logstash]$ cat conf.d/filebeat.conf
input {
    beats {
        # this is the port that filebeat's output.logstash points at; logstash opens it once started
        # (I don't remember whether the beats input plugin has to be installed separately; it was covered earlier)
        port => 5044
        host => "192.168.8.99"
    }
}
output {
    #-----node2-------------------------
    # the document_type set in filebeat arrives as [type] and is used here
    # to tell the logs apart and give each one its own index
    if [type] == "gateway-node2" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "gateway_zhiu-node2-%{+YYYY.MM}"
        }
    }
    if [type] == "notify_nsqsub-node2" {
        elasticsearch {
            hosts => ["192.168.8.99:9200"]
            index => "notify_nsqsub-node2-%{+YYYY.MM}"
        }
    }
}
Installing and configuring filebeat
- filebeat does not need a Java environment,
- just download the rpm package and install it directly (download link),
- filebeat is a single binary with no dependencies, so it uses very few resources
- this page describes all of the ELK-related components: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
filebeat configuration
[chenlc@zhifu-node2 filebeat]$ grep -vE "#|^$" filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
  # document_type is sent to logstash and used there as the condition
  # for creating a different index per log
  document_type: message-node2
  fields:
    log_source: message-node2
- input_type: log
  paths:
    - /var/log/mysqld.log
  document_type: mysqld-node2
  fields:
    log_source: mysqld-node2
- input_type: log
  paths:
    - /var/log/mysql_slow.log/mysql_slow.log
  document_type: mysql_slow-node2
  fields:
    log_source: mysql_slow-node2
- input_type: log
  paths:
    - /var/log/secure*
  document_type: secure-node2
  fields:
    log_source: secure-node2
- input_type: log
  paths:
    - /var/log/cron*
  document_type: cron-node2
  fields:
    log_source: cron-node2
filebeat.registry_file: ${path.data}/registry
output.logstash:
  hosts: ["192.168.8.99:5044"]
logging.level: warning
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  keepfiles: 10
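With the configuration in place, start filebeat and watch its own log for connection errors to logstash; the log file name under /var/log/filebeat is the default and an assumption:

```bash
systemctl enable filebeat
systemctl start filebeat
tail -f /var/log/filebeat/filebeat
```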
Original post: ELK (new version 5.x) in practice. Please credit the source when republishing!