ELK Deployment

ELK Deployment Architecture

ELK流程部署图.png

ELK Cluster (Native, Simplified)

Host Environment

Hostname  IP               OS       Role
node1     192.168.100.101  CentOS7  Logstash
node2     192.168.100.102  CentOS7  Elasticsearch master node / data node
node3     192.168.100.103  CentOS7  Elasticsearch data node
node4     192.168.100.104  CentOS7  Kibana
node5     192.168.100.105  CentOS7  Redis

Prerequisites

Install JDK 1.8 (node1, node2, node3, node4)

$ yum localinstall jdk-8u211-linux-x64.rpm
$ java -version

Raise file limits (node1, node2, node3, node4)

$ vi /etc/security/limits.conf
# Append the following
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096

Tune virtual memory & max open files (node1, node2, node3, node4)

$ vi /etc/sysctl.conf
# Append the following
vm.max_map_count=655360
fs.file-max=655360

Reboot (node1, node2, node3, node4)

$ reboot

Install Redis (node5)

# Install build dependencies
$ yum install gcc gcc-c++ -y
$ cd /usr/local
$ wget http://download.redis.io/releases/redis-5.0.5.tar.gz
$ tar zxvf redis-5.0.5.tar.gz
$ cd redis-5.0.5
$ make
$ make install
# Set up as a service (start on boot)
$ ./utils/install_server.sh
# Verify the installation
$ netstat -nltp|grep redis
$ ./redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>
# Start Redis and enable it on boot
$ service redis_6379 start
$ chkconfig redis_6379 on
$ vi /etc/redis/6379.conf
# Change to the following (note: this opens Redis to the network; restrict access in production)
#bind 127.0.0.1
protected-mode no

# Restart Redis
$ service redis_6379 restart

Install Elasticsearch

Create the ELK user and directories (node2, node3)

# Elasticsearch, Logstash, and Kibana must not run as root, so create a dedicated account.
$ useradd elk
# Create the ELK application directory
$ mkdir /usr/local/elkApp
# Create the ELK data directory
$ mkdir /usr/local/elkData
# Create the Elasticsearch home directory
$ mkdir -p /usr/local/elkData/es
# Create the Elasticsearch data directory
$ mkdir -p /usr/local/elkData/es/data
# Create the Elasticsearch log directory
$ mkdir -p /usr/local/elkData/es/log
# Set ownership
$ chown -R elk:elk /usr/local/elkApp
$ chown -R elk:elk /usr/local/elkData
##### Combined command #####
$ useradd elk && mkdir /usr/local/elkApp && mkdir /usr/local/elkData && mkdir -p /usr/local/elkData/es && mkdir -p /usr/local/elkData/es/data && mkdir -p /usr/local/elkData/es/log && chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Download Elasticsearch (node2, node3)

$ cd /usr/local
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
$ tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz
$ mv elasticsearch-7.2.0 elkApp/
$ cd elkApp
$ ln -s elasticsearch-7.2.0 elasticsearch
##### Combined command #####
$ tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz && mv elasticsearch-7.2.0 elkApp/ && cd elkApp && ln -s elasticsearch-7.2.0 elasticsearch

Configure Elasticsearch

The configuration file is /usr/local/elkApp/elasticsearch-7.2.0/config/elasticsearch.yml.

Master node elasticsearch.yml (node2)

bootstrap.memory_lock: true
cluster.name: es
node.name: es1
path.data: /usr/local/elkData/es/data
# matches the /usr/local/elkData/es/log directory created above
path.logs: /usr/local/elkData/es/log
network.host: 192.168.100.102
http.port: 9200
# do not set transport.host to localhost here, or the two hosts cannot reach each other
transport.tcp.port: 9300
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.100.102:9300","192.168.100.103:9300"]
# required in 7.x to bootstrap a new cluster (discovery.zen.minimum_master_nodes is ignored)
cluster.initial_master_nodes: ["es1"]

Data node elasticsearch.yml (node3)

bootstrap.memory_lock: true
cluster.name: es
node.name: es2
path.data: /usr/local/elkData/es/data
# matches the /usr/local/elkData/es/log directory created above
path.logs: /usr/local/elkData/es/log
network.host: 192.168.100.103
http.port: 9200
# do not set transport.host to localhost here, or the two hosts cannot reach each other
transport.tcp.port: 9300
node.master: false
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.100.102:9300","192.168.100.103:9300"]
# required in 7.x to bootstrap a new cluster
cluster.initial_master_nodes: ["es1"]

Configure start on boot (node2, node3)

$ vi /etc/systemd/system/elasticsearch.service
# Contents:
[Unit]
Description=elasticsearch
[Service]
User=elk
Group=elk
LimitNOFILE=100000
LimitNPROC=100000
# required for bootstrap.memory_lock: true in elasticsearch.yml
LimitMEMLOCK=infinity
ExecStart=/usr/local/elkApp/elasticsearch/bin/elasticsearch
[Install]
WantedBy=multi-user.target
$ systemctl daemon-reload
$ systemctl start elasticsearch
$ systemctl enable elasticsearch
##### Combined command #####
$ systemctl daemon-reload && systemctl start elasticsearch && systemctl enable elasticsearch

Self-test

# Method 1: check that port 9200 is listening
$ netstat -plntu
# Method 2: check the elasticsearch process
$ ps -ef | grep elasticsearch
# Method 3: test in a browser
# Single node status
http://192.168.100.102:9200
# Cluster health
http://192.168.100.102:9200/_cluster/health
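The health endpoint returns JSON whose status field is green, yellow, or red. A minimal sketch of checking it from a script; the canned response below stands in for the live curl output so the parsing can be shown offline:

```shell
# On a live node you would fetch the response with:
#   health=$(curl -s http://192.168.100.102:9200/_cluster/health)
health='{"cluster_name":"es","status":"green","number_of_nodes":2,"number_of_data_nodes":2}'
# Extract the "status" field; green = all shards allocated,
# yellow = replicas unassigned, red = primaries missing.
status=$(printf '%s' "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```

With both nodes up, `number_of_nodes` should report 2 and the status should be green.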

Install Logstash (node1)

Create the ELK user and directories

# Elasticsearch, Logstash, and Kibana must not run as root, so create a dedicated account.
$ useradd elk
# Create the ELK application directory
$ mkdir /usr/local/elkApp
# Create the ELK data directory
$ mkdir /usr/local/elkData
# Create the Logstash home directory
$ mkdir -p /usr/local/elkData/logstash
# Create the Logstash data directory
$ mkdir -p /usr/local/elkData/logstash/data
# Create the Logstash log directory
$ mkdir -p /usr/local/elkData/logstash/log
# Set ownership
$ chown -R elk:elk /usr/local/elkApp
$ chown -R elk:elk /usr/local/elkData
##### Combined command #####
$ useradd elk && mkdir /usr/local/elkApp && mkdir /usr/local/elkData && mkdir -p /usr/local/elkData/logstash && mkdir -p /usr/local/elkData/logstash/data && mkdir -p /usr/local/elkData/logstash/log && chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Install Logstash

$ cd /usr/local
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
$ tar -zxvf logstash-7.2.0.tar.gz
$ mv logstash-7.2.0 elkApp/
$ cd elkApp
$ ln -s logstash-7.2.0 logstash
# Set ownership
$ chown -R elk:elk /usr/local/elkApp
$ chown -R elk:elk /usr/local/elkData
##### Combined command #####
$ tar -zxvf logstash-7.2.0.tar.gz && mv logstash-7.2.0 elkApp/ && cd elkApp && ln -s logstash-7.2.0 logstash && chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Configure Logstash

$ vi /usr/local/elkApp/logstash-7.2.0/config/logstash.yml
# Append the following
path.data: /usr/local/elkData/logstash/data
path.logs: /usr/local/elkData/logstash/log

Configure input/output

$ vi /usr/local/elkApp/logstash-7.2.0/config/input-output.conf
# Contents:
input {
  redis {
    data_type => "list"
    key => "logstash"
    host => "192.168.100.105"
    port => 6379
    threads => 5
    codec => "json"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["192.168.100.102:9200","192.168.100.103:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    # document_type is deprecated in 7.x (mapping types were removed), so it is omitted
  }
  stdout {
  }
}
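In the index pattern above, %{type} is replaced by each event's "type" field and %{+YYYY.MM.dd} by the event's timestamp, so every type gets one index per day. A rough sketch of the expansion, using the current date in place of an event timestamp:

```shell
# Expand "logstash-%{type}-%{+YYYY.MM.dd}" for a sample event.
type="logtest"                 # the event's "type" field
day=$(date -u +%Y.%m.%d)       # Logstash actually uses the event's @timestamp
index="logstash-${type}-${day}"
echo "$index"
```

Daily indices make retention simple: old days can be deleted or curated as whole indices.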

Configure pipelines.yml

$ vi /usr/local/elkApp/logstash-7.2.0/config/pipelines.yml
# Contents:
- pipeline.id: id1
  pipeline.workers: 1
  path.config: "/usr/local/elkApp/logstash-7.2.0/config/input-output.conf"

Configure start on boot

$ vi /etc/systemd/system/logstash.service
# Contents:
[Unit]
Description=logstash
[Service]
User=elk
Group=elk
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elkApp/logstash/bin/logstash
[Install]
WantedBy=multi-user.target
$ systemctl daemon-reload
$ systemctl start logstash
$ systemctl enable logstash
##### Combined command #####
$ systemctl daemon-reload && systemctl start logstash && systemctl enable logstash

Self-test

# Method 1: check that port 9600 is listening
$ netstat -plntu
# Method 2: check the logstash process
$ ps -ef | grep logstash
# Method 3: run logstash from the command line
# -f specifies the config file
# -t only checks that the config is valid
$ /usr/local/elkApp/logstash/bin/logstash -f /usr/local/elkApp/logstash/config/input-output.conf -t

Install Kibana (node4)

Create the ELK user and directories

# Elasticsearch, Logstash, and Kibana must not run as root, so create a dedicated account.
$ useradd elk
# Create the ELK application directory
$ mkdir /usr/local/elkApp
# Create the ELK data directory
$ mkdir /usr/local/elkData
# Create the Kibana home directory
$ mkdir -p /usr/local/elkData/kibana
# Create the Kibana data directory
$ mkdir -p /usr/local/elkData/kibana/data
# Create the Kibana log directory
$ mkdir -p /usr/local/elkData/kibana/log
# Set ownership
$ chown -R elk:elk /usr/local/elkApp
$ chown -R elk:elk /usr/local/elkData
##### Combined command #####
$ useradd elk && mkdir /usr/local/elkApp && mkdir /usr/local/elkData && mkdir -p /usr/local/elkData/kibana && mkdir -p /usr/local/elkData/kibana/data && mkdir -p /usr/local/elkData/kibana/log && chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Install Kibana

$ cd /usr/local
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
$ tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz
$ mv kibana-7.2.0-linux-x86_64 elkApp/
$ cd elkApp
$ ln -s kibana-7.2.0-linux-x86_64 kibana
# Set ownership
$ chown -R elk:elk /usr/local/elkApp
$ chown -R elk:elk /usr/local/elkData
##### Combined command #####
$ tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz && mv kibana-7.2.0-linux-x86_64 elkApp/ && cd elkApp && ln -s kibana-7.2.0-linux-x86_64 kibana && chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Configure Kibana

$ vi /usr/local/elkApp/kibana-7.2.0-linux-x86_64/config/kibana.yml
# Append the following
server.port: 5601
server.host: "192.168.100.104"
elasticsearch.hosts: ["http://192.168.100.102:9200"]

Configure start on boot

$ vi /etc/systemd/system/kibana.service
# Contents:
[Unit]
Description=kibana
[Service]
User=elk
Group=elk
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elkApp/kibana/bin/kibana
[Install]
WantedBy=multi-user.target
$ systemctl daemon-reload
$ systemctl start kibana
$ systemctl enable kibana
##### Combined command #####
$ systemctl daemon-reload && systemctl start kibana && systemctl enable kibana

Self-test

# Method 1: check that port 5601 is listening
$ netstat -plntu
# Method 2: check the kibana process
$ ps -ef | grep kibana
# Method 3: open in a browser (Kibana listens on server.host)
http://192.168.100.104:5601
# Method 4: run kibana from the command line
$ /usr/local/elkApp/kibana/bin/kibana --allow-root

Testing

Write test log entries (node5)

# Push several entries
$ redis-cli
127.0.0.1:6379> lpush logstash '{"host":"127.0.0.1","type":"logtest","message":"hello"}'
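The lpush above can be scripted to push several entries at once, in the JSON shape the Logstash redis input expects (data_type "list", key "logstash", codec "json"). A small sketch; the redis-cli line is commented out so the loop also runs on hosts without Redis, and would be uncommented on node5:

```shell
# Generate a few test events and (on node5) push them onto the "logstash" list.
for i in 1 2 3; do
  payload="{\"host\":\"127.0.0.1\",\"type\":\"logtest\",\"message\":\"hello $i\"}"
  echo "$payload"
  # redis-cli -h 192.168.100.105 lpush logstash "$payload"
done
```

Each entry should then show up in Elasticsearch under an index named logstash-logtest-YYYY.MM.dd.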

Verify in Kibana

Screenshots: kibana-1.png, kibana-2.png, kibana-3.png, kibana-4.png

ELK Cluster (Docker, to be continued)

ELK Cluster (Docker Compose)

See docker-compose-elk.

Install and Configure Filebeat

Download: https://www.elastic.co/cn/downloads/beats/filebeat

Install Filebeat

Linux (tarball)

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
$ tar -zxvf filebeat-7.2.0-linux-x86_64.tar.gz
$ cd filebeat-7.2.0-linux-x86_64
$ ./filebeat -e -c filebeat.yml

CentOS7-yum

$ sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
$ vi /etc/yum.repos.d/elastic.repo
# Contents:
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

$ sudo yum install filebeat

CentOS7-rpm

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-x86_64.rpm
$ sudo rpm -vi filebeat-7.2.0-x86_64.rpm

Ubuntu-apt-get

# Import the signing key
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt-get install apt-transport-https
# Save the repository definition to /etc/apt/sources.list.d/elastic-7.x.list
$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
$ sudo apt-get update && sudo apt-get install filebeat
# The configuration file is /etc/filebeat/filebeat.yml
# Configure filebeat to start on boot
$ sudo update-rc.d filebeat defaults 95 10

Ubuntu-deb

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-amd64.deb
$ sudo dpkg -i filebeat-7.2.0-amd64.deb

Docker

$ docker pull docker.elastic.co/beats/filebeat:7.2.0
$ docker run \
docker.elastic.co/beats/filebeat:7.2.0 \
setup -E setup.kibana.host=kibana:5601 \
-E output.elasticsearch.hosts=["elasticsearch:9200"]

See Running Filebeat on Docker.

Configure Filebeat

Notes

The configuration file is filebeat.yml. Create a log file and append lines to it with the following commands, so there is something to ship for testing:

$ touch /usr/local/access-filebeat-test.log
$ echo "this msg is from logstash" >> /usr/local/access-filebeat-test.log
$ echo "this msg is from redis" >> /usr/local/access-filebeat-test.log

Configure Filebeat to ship to Logstash

# filebeat.prospectors was renamed filebeat.inputs in recent versions
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/access-filebeat-test.log
output.logstash:
  hosts: ["192.168.100.:5044"]

Configure Filebeat to ship to Redis

filebeat.inputs:
- type: log
  paths:
    - /usr/local/access-filebeat-test.log
output.redis:
  hosts: ["192.168.100.105"]
  key: "filebeat"
  db: 0
  timeout: 5

Configure Filebeat to ship to Kafka

filebeat.inputs:
- type: log
  paths:
    - /usr/local/access-filebeat-test.log
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
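The topic value '%{[fields.log_topic]}' is resolved per event: Filebeat substitutes the event's fields.log_topic value, which each input can set, so different log files can be routed to different Kafka topics. A hypothetical illustration of that mapping (the topic names here are made up):

```shell
# Each input would carry, e.g.:
#   fields:
#     log_topic: nginx-access
# and the Kafka output then publishes the event to that topic.
for log_topic in nginx-access app-error; do
  echo "fields.log_topic=$log_topic -> Kafka topic '$log_topic'"
done
```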

More configuration

See Configuring Filebeat.

Configure Logstash

Configure Logstash to receive data from Filebeat

$ vi /usr/local/elkApp/logstash-7.2.0/config/filebeat.conf
# Contents:
input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

Configure Logstash to read from Redis

$ vi /usr/local/elkApp/logstash-7.2.0/config/redis.conf
# Contents:
input {
  redis {
    data_type => "list"
    key => "logstash"
    host => "192.168.100.105"
    port => 6379
    threads => 5
    codec => "json"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["192.168.100.102:9200","192.168.100.103:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    # document_type is deprecated in 7.x (mapping types were removed), so it is omitted
  }
  stdout {
  }
}

Configure pipelines.yml

$ vi /usr/local/elkApp/logstash-7.2.0/config/pipelines.yml
# Contents:
- pipeline.id: id1
  pipeline.workers: 1
  # point at the pipeline config created above
  path.config: "/usr/local/elkApp/logstash-7.2.0/config/redis.conf"

Common Errors

bootstrap checks failed

To run Elasticsearch in a development environment despite failed bootstrap checks, add the following to elasticsearch.yml:

transport.host: 127.0.0.1
http.host: 0.0.0.0

Note that a cluster cannot be formed in development mode. Do not use an Elasticsearch node in production that fails the bootstrap checks.

AccessDeniedException

Symptom

Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elkApp/elasticsearch/config/jvm.options

Fix

$ chown -R elk:elk /usr/local/elkApp && chown -R elk:elk /usr/local/elkData

Kibana configuration change

# In earlier versions, the Kibana setting was
elasticsearch.url: "http://192.168.100.102:9200"
# In 7.2.0 it is
elasticsearch.hosts: ["http://192.168.100.102:9200"]

If misconfigured, Kibana reports:

Error: [elasticsearch.url]: definition for this key is missing

Appendix

ELK Downloads

https://www.elastic.co/cn/downloads/

Elasticsearch settings

Setting                              Description
bootstrap.memory_lock                Set to true to lock the JVM heap in RAM and avoid swapping; important for node health
cluster.name                         Cluster name
node.name                            Node name
path.data                            Data directory
path.logs                            Log directory
network.host                         Node host/IP
http.port                            HTTP port
transport.tcp.port                   TCP transport port
node.master                          Whether the node is master-eligible
node.data                            Whether the node holds data
discovery.zen.ping.unicast.hosts     Initial list of master-eligible hosts probed when a node starts
discovery.zen.minimum_master_nodes   Minimum number of master-eligible nodes (ignored in 7.x)

Logstash settings

Setting               Description
data_type => "list"   Read events from a Redis list
key => "logstash"     Redis key to read from
codec => "json"       Decode events as JSON

Kibana settings

Setting                Description
server.port            Port
server.host            Host
elasticsearch.hosts    Address of an Elasticsearch coordinating or master node (elasticsearch.url before 7.x)

Debugging Elasticsearch

$ sudo journalctl -f
$ sudo journalctl --unit elasticsearch
$ sudo journalctl --unit elasticsearch --since "2016-10-30 18:17:16"