Installing ELK on CentOS 7

This article describes how to install ELK on CentOS 7, that is, how to install Elasticsearch, Logstash, and Kibana.

Posted by yishuifengxiao on 2019-09-19

1 Installing Elasticsearch

1.1 Installation steps

Elasticsearch requires a Java runtime, so a Java environment must be configured before installing Elasticsearch.
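
A quick sanity check, assuming the JDK is already installed and on the PATH (the exact version output will vary with your JDK):

$ java -version
$ echo $JAVA_HOME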

1.1.1 Download the Elasticsearch package

Download page: https://www.elastic.co/cn/downloads/elasticsearch

1.1.2 Install

After the download finishes, copy the package to the installation directory and extract it with the following commands:

$ tar xf elasticsearch-6.5.4.tar.gz
$ cd elasticsearch-6.5.4
1.1.3 Edit the configuration file

By default, Elasticsearch is only reachable from the local machine. To make it accessible from other hosts, edit the elasticsearch.yml file in the config folder. The key lines are the following two settings:

# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
1.1.4 Start Elasticsearch

Switch to a non-root user and run the following command from the bin directory:

./elasticsearch

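To run it in the background instead, Elasticsearch supports daemon mode; -d daemonizes the process and -p writes its PID to the given file:

./elasticsearch -d -p pid
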
Once startup succeeds, you can visit the following URL:

http://ip:9200

and you should get a result similar to the following:

{
  "name": "eBeH7rf",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "UDEDuGDHQvq50GT7JCaLeQ",
  "version": {
    "number": "6.5.4",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "d2ef93d",
    "build_date": "2018-12-17T21:17:40.758843Z",
    "build_snapshot": false,
    "lucene_version": "7.5.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

1.2 Problems encountered during installation

  1. max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Cause: the limit on open files is too low, so the elasticsearch process cannot create the local files it needs.

Solution: switch to the root user and edit /etc/security/limits.conf, adding the following values at the end of the file:

$ vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

The leading * must be included. Log out and back in (or reboot) for the new limits to take effect. Afterwards you can check the limits with ulimit, as shown below.
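
Note that ulimit -n reports the open-file limit (not the process count); ulimit -u reports the maximum number of user processes:

$ ulimit -n   # open-file limit; should now report 65536
$ ulimit -u   # max user processes; should now report 2048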

  2. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
... ...

Cause: the maximum number of virtual memory areas is too low; the kernel parameter needs to be raised.

Solution: switch to the root user and add vm.max_map_count=262144 to /etc/sysctl.conf:

$ vim /etc/sysctl.conf
vm.max_map_count=262144

Then apply the change by running:

sysctl -p
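
You can confirm that the new value is in effect:

$ sysctl vm.max_map_count
vm.max_map_count = 262144
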
  3. max number of threads [1024] for user [es] likely too low, increase to at least [2048]

Cause: the limit on the number of threads the user may create is too low, so the process cannot spawn the threads it needs.

Solution: switch to the root user and edit the 90-nproc.conf file in the limits.d directory:

vi /etc/security/limits.d/90-nproc.conf

Find the following line:

* soft nproc 1024

and change it to:

* soft nproc 2048

  4. system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Cause: CentOS 6 does not support SecComp, while Elasticsearch (6.4.2 here) defaults bootstrap.system_call_filter to true and checks for it at startup; the failed check prevents Elasticsearch from starting.
See: https://github.com/elastic/elasticsearch/issues/22899

Solution: add the setting bootstrap.system_call_filter: false to elasticsearch.yml; note that it must go below the Memory section:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

Then restart the Elasticsearch service.

  5. Startup permissions

Elasticsearch refuses to start with root privileges and must be started by a non-root account. When starting it as a non-root user, you may run into "permission denied" errors. The fix is as follows:

As root, create the data and logs folders under the Elasticsearch root directory and grant the non-root user the corresponding permissions on them, as sketched below.
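
A minimal sketch, assuming the non-root account is named es and Elasticsearch lives in /usr/local/elasticsearch-6.5.4 (substitute your own user and path):

# run as root
cd /usr/local/elasticsearch-6.5.4
mkdir -p data logs
chown -R es:es /usr/local/elasticsearch-6.5.4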

1.3 Installing the elasticsearch-head plugin

Before installing the elasticsearch-head plugin, you need a working Node.js environment; for an installation guide see 菜鸟教程-Node.js 安装配置.

Once Node.js is set up, download the elasticsearch-head plugin package from GitHub.
The plugin's source code is hosted on GitHub (https://github.com/mobz/elasticsearch-head/); you can pull it down with git, or download it directly from the following address:

https://github.com/mobz/elasticsearch-head/archive/master.zip

After the download finishes, run the plugin with the following commands:

unzip elasticsearch-head-master.zip
cd elasticsearch-head-master
npm install
npm run start

Once it is running, you can access it at http://ip:9100.

Note

By default, the elasticsearch-head plugin may fail when connecting to Elasticsearch. You need to add the following settings to Elasticsearch's elasticsearch.yml configuration file:

http.cors.enabled: true
http.cors.allow-origin: "*"

These settings allow cross-origin requests to Elasticsearch. Once they are in place, restart the Elasticsearch service.

1.4 Installing the ik analyzer

The ik plugin can be downloaded from:

https://github.com/medcl/elasticsearch-analysis-ik/releases

Download the package that matches your Elasticsearch version.

Create a folder named ik under Elasticsearch's plugins directory, then copy the downloaded package into that path.

Next, unzip the package:

unzip elasticsearch-analysis-ik-6.5.4.zip
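
After restarting Elasticsearch so the plugin is loaded, you can sanity-check the analyzer through the _analyze API; this is a sketch that assumes the service address 192.168.19.128:9200 used throughout this article:

curl -XGET 'http://192.168.19.128:9200/_analyze?pretty' -H 'Content-Type:application/json' -d'
{"analyzer": "ik_max_word", "text": "中华人民共和国"}
'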

1.5 Basic Elasticsearch usage

1.5.1 Create an index

curl -XPUT http://192.168.19.128:9200/index
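
If the index is created successfully, Elasticsearch returns an acknowledgement similar to the following (the index field echoes the index name):

{"acknowledged":true,"shards_acknowledged":true,"index":"index"}
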
1.5.2 Create a mapping

curl -XPOST http://192.168.19.128:9200/index/fulltext/_mapping -H 'Content-Type:application/json' -d'
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "ik_max_word",
      "search_analyzer": "ik_max_word"
    }
  }
}'

Note that this command may fail, with a response like the following:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
  },
  "status": 403
}

This block is typically applied when the disk hosting Elasticsearch runs low on space and the index is switched to read-only. The fix is as follows:

curl -XPUT -H "Content-Type: application/json" http://192.168.19.128:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'

You can replace _all with your own index name, i.e. run directly:

curl -XPUT -H "Content-Type: application/json" http://192.168.19.128:9200/index/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'

After running the command, you get the following response:

{
  "acknowledged": true
}

Now run the mapping command again; this time the result is:

{
  "acknowledged": true
}

which indicates that it succeeded.
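
You can verify the stored mapping with a GET request:

curl -XGET 'http://192.168.19.128:9200/index/_mapping?pretty'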

1.5.3 Index some documents

curl -XPOST http://192.168.19.128:9200/index/fulltext/1 -H 'Content-Type:application/json' -d'
{"content":"沈阳首次公布记载“平顶山惨案”经过的历史档案"}
'

curl -XPOST http://192.168.19.128:9200/index/fulltext/2 -H 'Content-Type:application/json' -d'
{"content":"沙特公布石油设施遇袭调查结果"}
'

curl -XPOST http://192.168.19.128:9200/index/fulltext/3 -H 'Content-Type:application/json' -d'
{"content":"华为5G合同第一!中国5G或再迎一个爆发时刻"}
'

curl -XPOST http://192.168.19.128:9200/index/fulltext/4 -H 'Content-Type:application/json' -d'
{"content":"外交部:中方在南海万安滩有关作业无可非议"}
'

curl -XPOST http://192.168.19.128:9200/index/fulltext/5 -H 'Content-Type:application/json' -d'
{"content":"发改委:猪肉价格趋于稳定 涨幅较8月明显收窄"}
'
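
As an alternative sketch, documents like these can be written in a single round trip with the _bulk API; the id and text below are made up for illustration (the body is newline-delimited JSON and must end with a newline):

curl -XPOST http://192.168.19.128:9200/index/fulltext/_bulk -H 'Content-Type:application/x-ndjson' -d'
{"index":{"_id":"6"}}
{"content":"通过 _bulk 接口批量写入的一条示例数据"}
'
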
1.5.4 Search with highlighting

curl -XPOST 'http://192.168.19.128:9200/index/fulltext/_search?pretty' -H 'Content-Type:application/json' -d'
{
  "query": { "match": { "content": "华为" }},
  "highlight": {
    "pre_tags": ["<tag1>", "<tag2>"],
    "post_tags": ["</tag1>", "</tag2>"],
    "fields": {
      "content": {}
    }
  }
}'

The response is as follows:

{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 0.6047856,
    "hits": [
      {
        "_index": "index",
        "_type": "fulltext",
        "_id": "4",
        "_score": 0.6047856,
        "_source": {
          "content": "华为5G合同第一!中国5G或再迎一个爆发时刻"
        },
        "highlight": {
          "content": [
            "<tag1>华为</tag1>5G合同第一!中国5G或再迎一个爆发时刻"
          ]
        }
      }
    ]
  }
}

2 Installing Logstash

Logstash also requires a Java environment; set up Java before installing Logstash.

  1. Download Logstash from https://www.elastic.co/cn/downloads/logstash
  2. Copy the downloaded package to the installation path and extract it:

tar -xvzf logstash-7.3.2.tar.gz

  3. Modify the configuration:

cd config
cp logstash-sample.conf logstash.conf

Then put the following content into the logstash.conf file:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.19.128:9200"] # the Elasticsearch instance the index is written to
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
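
As an optional smoke test before wiring up any Beats shipper, you can run Logstash with an inline pipeline that echoes stdin back to stdout (run from the bin directory; type a line and a structured event should be printed):

./logstash -e 'input { stdin { } } output { stdout {} }'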

Then edit the logstash.yml file and add the following setting:

# http.host: "127.0.0.1"
http.host: "192.168.19.128"
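
You can also ask Logstash to validate the pipeline file without starting it, via the --config.test_and_exit flag:

./logstash -f ../config/logstash.conf --config.test_and_exit
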
  4. Start the program

./logstash -f ../config/logstash.conf

The startup log looks like the following:

Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/local/logstash/logstash-7.3.2/logs which is now configured via log4j2.properties
[2019-09-19T10:57:40,859][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash/logstash-7.3.2/data/queue"}
[2019-09-19T10:57:40,886][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/local/logstash/logstash-7.3.2/data/dead_letter_queue"}
[2019-09-19T10:57:41,456][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-09-19T10:57:41,470][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.2"}
[2019-09-19T10:57:41,501][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"71e401de-dfc5-4fff-afea-8648463a7d19", :path=>"/usr/local/logstash/logstash-7.3.2/data/uuid"}
[2019-09-19T10:57:43,474][INFO ][org.reflections.Reflections] Reflections took 64 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-19T10:57:44,792][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-09-19T10:57:45,067][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-09-19T10:57:45,138][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-09-19T10:57:45,145][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-09-19T10:57:45,197][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-09-19T10:57:45,341][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-19T10:57:45,383][INFO ][logstash.outputs.elasticsearch] Index Lifecycle Management is set to 'auto', but will be disabled - Your Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management
[2019-09-19T10:57:45,385][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-09-19T10:57:45,400][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-19T10:57:45,435][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x3904e1c8 run>"}
[2019-09-19T10:57:45,447][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2019-09-19T10:57:46,159][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-09-19T10:57:46,264][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-19T10:57:46,771][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-09-19T10:57:46,818][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-19T10:57:47,976][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

As the log shows, Logstash has connected to the Elasticsearch address and is listening on port 5044 (the Beats input) and port 9600 (the Logstash API endpoint).
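
Port 5044 is meant for Beats shippers. As an illustration only (not part of the original walkthrough), a Filebeat instance on another machine could forward its logs to this pipeline with a filebeat.yml fragment like the following; the log path is hypothetical:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages   # hypothetical path; use your own log files

output.logstash:
  hosts: ["192.168.19.128:5044"]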

3 Installing Kibana

Kibana ships with its own Node.js runtime, so unlike Elasticsearch and Logstash it does not require Java.

  1. Download Kibana from https://www.elastic.co/cn/downloads/kibana
  2. Copy the downloaded package to the installation path and extract it:

tar -xvzf kibana-7.3.2-linux-x86_64.tar.gz

  3. Edit the configuration file kibana.yml in the config directory, adding the following two lines:

server.host: "192.168.19.128"
elasticsearch.hosts: ["http://192.168.19.128:9200"]

  4. Start the program

[root@localhost bin]# ./kibana
Kibana should not be run as root. Use --allow-root to continue.
[root@localhost bin]# ./kibana --allow-root
log [03:13:43.713] [info][plugins-system] Setting up [1] plugins: [translations]
log [03:13:43.720] [info][plugins][translations] Setting up plugin
log [03:13:43.721] [info][plugins-system] Starting [1] plugins: [translations]

Note: by default, Kibana cannot be started as the root user.

Once it has started, you can access it at http://ip:5601.

If you see the following when visiting the page:

Kibana server is not ready yet

the problem may be a version mismatch between Kibana and Elasticsearch; the two should be kept on the same version (in this walkthrough, Kibana 7.3.2 talking to Elasticsearch 6.5.4 would trigger exactly this).
