Corresponding IPs:
es1 (originally named master, but that kept failing: network.host refused to work with the name master, for reasons unknown, so it was renamed) -> 172.18.0.2
slave1 -> 172.18.0.3
slave2 -> 172.18.0.4
On the first run, the master node must be started first.
Elasticsearch cannot be started directly as root; it has to be started by another, non-root account.
The nodes need passwordless SSH access to each other.
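A minimal sketch of setting up that passwordless access, assuming the scp/ssh steps below are run as root (swap in whichever account you actually use):
# run on the master host; accept the defaults when prompted
ssh-keygen -t rsa
# copy the public key to both slaves
ssh-copy-id root@slave1
ssh-copy-id root@slave2
# verify: this should print the hostname without asking for a password
ssh slave1 hostname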
1. Download jdk1.8.0_11 and put it under /usr/local/ or /usr/share/. Elasticsearch 7.0.1 ships with a bundled JDK, but we do not use it here; we use the one we configure ourselves.
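A sketch of unpacking the JDK; the tarball name here is an assumption, so match it to the file you actually downloaded:
tar -zxvf jdk-8u11-linux-x64.tar.gz -C /usr/local/
ls /usr/local/jdk1.8.0_11/bin/java    # sanity check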
2. Download Elasticsearch from https://www.elastic.co/cn/downloads/. This guide uses 7.0.1 as the example; configuration differs between the 5.x, 6.x, and 7.x lines.
3. Elasticsearch also goes under /usr/local/, and the JDK has to be copied to the slave nodes:
scp -r /usr/local/jdk1.8.0_11 slave1:/usr/local/jdk1.8.0_11
scp -r /usr/local/jdk1.8.0_11 slave2:/usr/local/jdk1.8.0_11
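A sketch of unpacking Elasticsearch into place; the archive name is assumed to be the standard 7.0.1 Linux tarball, adjust if yours differs:
mkdir -p /usr/local/elasticsearch
tar -zxvf elasticsearch-7.0.1-linux-x86_64.tar.gz -C /usr/local/elasticsearch/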
4. Add the system environment variables and paths (the screenshot from the original is missing; a sketch follows below).
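Since the screenshot is not available, here is a rough sketch of the kind of entries meant; the location /etc/profile is an assumption:
# /etc/profile (assumed location for the system-wide variables)
export JAVA_HOME=/usr/local/jdk1.8.0_11
export PATH=$JAVA_HOME/bin:$PATH
Reload it afterwards with: source /etc/profile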
To keep the es user from failing to find java under $JAVA_HOME/bin, add the Java path in the two places shown below inside bin/elasticsearch:
cd /usr/local/elasticsearch/elasticsearch-7.0.1/bin; vi elasticsearch;
# (edit 1) point the script at our own JDK
export JAVA_HOME=/usr/local/jdk1.8.0_11
export PATH=$JAVA_HOME/bin:$PATH

source "`dirname "$0"`"/elasticsearch-env

ES_JVM_OPTIONS="$ES_PATH_CONF"/jvm.options
JVM_OPTIONS=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JvmOptionsParser "$ES_JVM_OPTIONS"`
ES_JAVA_OPTS="${JVM_OPTIONS//\$\{ES_TMPDIR\}/$ES_TMPDIR} $ES_JAVA_OPTS"

# (edit 2) force the java binary to our JDK when it is executable
if [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="/usr/local/jdk1.8.0_11/bin/java"
else
  JAVA=`which java`
fi
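A quick check, not in the original, that the es user really picks up the intended JDK:
su - es -c "java -version"    # login shell so /etc/profile is sourced; should report 1.8.0_11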
5. Elasticsearch needs a large number of memory-mapped areas, so edit /etc/sysctl.conf and add:
vm.max_map_count = 262144
scp /etc/sysctl.conf slave1:/etc/sysctl.conf
scp /etc/sysctl.conf slave2:/etc/sysctl.conf
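The kernel setting only takes effect once it is reloaded; a small addition, not in the original, to apply it on every node without rebooting:
sysctl -p
ssh slave1 "sysctl -p"
ssh slave2 "sysctl -p"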
6. Add the user and set permissions:
cd /usr/local/elasticsearch/elasticsearch-7.0.1/
mkdir data logs
sudo useradd es
sudo chown -R es:es /usr/local/elasticsearch/
Once these basic settings are in place, scp the modified startup script to slave1 and slave2:
cd /usr/local/elasticsearch/elasticsearch-7.0.1/bin
scp elasticsearch slave1:/usr/local/elasticsearch/elasticsearch-7.0.1/bin
scp elasticsearch slave2:/usr/local/elasticsearch/elasticsearch-7.0.1/bin
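One step the original leaves implicit: the es account and the directory ownership also have to exist on slave1 and slave2. A sketch, assuming root SSH access and that the Elasticsearch directory is already in place on the slaves:
ssh slave1 "useradd es; chown -R es:es /usr/local/elasticsearch/"
ssh slave2 "useradd es; chown -R es:es /usr/local/elasticsearch/"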
7. Next comes the actual node configuration.
cd /usr/local/elasticsearch/elasticsearch-7.0.1/;
vi config/elasticsearch.yml
elasticsearch.keystore is generated automatically at startup; when reinstalling, it can be deleted if needed.
Below are the configurations for the three nodes:
1. The master node. Note that network.host below was originally set to the hostname master, which kept failing, so it was changed to the IP address directly.
cluster.name: es6.2
node.name: master
node.master: true
node.data: true
path.data: /usr/local/elasticsearch/elasticsearch-7.0.1/data
path.logs: /usr/local/elasticsearch/elasticsearch-7.0.1/logs
bootstrap.memory_lock: false
network.host: 172.18.0.2
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["172.18.0.2:9300", "slave1:9300", "slave2:9300"]
discovery.zen.minimum_master_nodes: 1
cluster.initial_master_nodes: ["master", "slave1", "slave2"]   # new in 7.0.1
#logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService: TRACE
#logger.org.elasticsearch.discovery: TRACE
2. The slave1 node:
cluster.name: es6.2
node.name: slave1
node.master: true
node.data: true
path.data: /usr/local/elasticsearch/elasticsearch-7.0.1/data
path.logs: /usr/local/elasticsearch/elasticsearch-7.0.1/logs
bootstrap.memory_lock: false
network.host: slave1
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["172.18.0.2:9300", "slave1:9300", "slave2:9300"]
discovery.zen.minimum_master_nodes: 1
cluster.initial_master_nodes: ["master", "slave1", "slave2"]
3. The slave2 node:
cluster.name: es6.2
node.name: slave2
node.master: true
node.data: true
path.data: /usr/local/elasticsearch/elasticsearch-7.0.1/data
path.logs: /usr/local/elasticsearch/elasticsearch-7.0.1/logs
bootstrap.memory_lock: false
network.host: slave2
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["172.18.0.2:9300", "slave1:9300", "slave2:9300"]
discovery.zen.minimum_master_nodes: 1
cluster.initial_master_nodes: ["master", "slave1", "slave2"]
Note the differences between the three configs above: only node.name and network.host change from node to node.
Note: the environments on master, slave1, and slave2 must all be identical.
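A side note, not from the original write-up: in 7.x the discovery.zen.* settings above are deprecated aliases, and discovery.zen.minimum_master_nodes is ignored. The 7.x-native equivalent of those discovery lines would look roughly like this:
discovery.seed_hosts: ["172.18.0.2:9300", "slave1:9300", "slave2:9300"]
cluster.initial_master_nodes: ["master", "slave1", "slave2"]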
8. Before the first startup, delete any files under the data and logs directories:
cd /usr/local/elasticsearch/elasticsearch-7.0.1;
rm -rf data/* logs/* config/elasticsearch.keystore;
9. Start the master node first, and remember to switch to the es user to do it (su es drops you into a new shell; run the start command from there). For the first start it is easier to debug without -d (background daemon mode); run it with & instead:
master:
cd /usr/local/elasticsearch/elasticsearch-7.0.1/
su es
./bin/elasticsearch &
Then open another terminal:
ssh slave1;
cd /usr/local/elasticsearch/elasticsearch-7.0.1/
su es
./bin/elasticsearch -d
ssh slave2;
cd /usr/local/elasticsearch/elasticsearch-7.0.1/
su es
./bin/elasticsearch -d
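If you prefer not to drop into an interactive su shell, a one-liner alternative (my suggestion, not from the original) is:
su es -c "cd /usr/local/elasticsearch/elasticsearch-7.0.1 && ./bin/elasticsearch -d"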
Start it the same way on slave1 and slave2. Startup is a little slow; wait about half a minute and watch the master terminal:
You will see the nodes being added to the cluster named es6.2.
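Optionally, an addition not in the original: follow the master's log while the slaves join; the log file is named after the cluster:
tail -f /usr/local/elasticsearch/elasticsearch-7.0.1/logs/es6.2.log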
10. Test:
curl "http://localhost:9200/?pretty"
curl "http://127.0.0.1:9200/_cat/health?v"
curl "http://master:9200/_cat/allocation?v"
curl "http://localhost:9200/_cat/"
curl "http://slave1:9200/_cat/"
curl "http://slave2:9200/_cat/"
curl "http://master:9200/_cat/"
curl "http://master:9200/_cat/nodes"
The output should list all three nodes and the cluster health.
After configuring Elasticsearch, startup may report errors like the following:
[elk@localhost bin]$ ./elasticsearch
... ...
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [1024] for user [elk] is too low, increase to at least [2048]
[3]: max virtual memory areas vm.max_map_count [256000] is too low, increase to at least [262144]
[4]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
... ...
1. For errors [1] and [2], proceed as follows:
Edit the /etc/security/limits.conf file:
[root@yqtrack-elk03 /]# vim /etc/security/limits.conf

# add the following entries:
*        -    nproc     65535
*        -    nofile    409600
elastic  -    memlock   unlimited
Edit the /etc/security/limits.d/90-nproc.conf file:
[root@yqtrack-elk03 /]# vim /etc/security/limits.d/90-nproc.conf

# change the following entries:
*     soft    nproc    unlimited
root  soft    nproc    unlimited
After making the changes, log in again as the elk user and check whether the settings took effect:
[elk@yqtrack-elk03 /]$ ulimit -n
409600
[elk@yqtrack-elk03 /]$ ulimit -u
65535
Also edit /etc/security/limits.conf and append the following:
* soft nofile 65536
* hard nofile 65536
Changes to this file only take effect after the user logs in again.
Under SSH this means an actual re-login; just running su elk is not enough. If the account has no password yet, set one and then log in over ssh.
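A sketch of that re-login, using the elk account from the error messages above:
passwd elk              # set a password if the account does not have one yet
ssh elk@localhost       # a fresh login so the new limits apply
ulimit -n               # verify
ulimit -u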
2. For error [3], proceed as follows:
Edit the settings in /etc/sysctl.conf:
[root@localhost /]# vim /etc/sysctl.conf

# maximum number of VMAs (virtual memory areas) a process may own:
vm.max_map_count=262144

# swapping tendency threshold:
vm.swappiness=1

# disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

[root@localhost /]# sysctl -p
3. For error [4], proceed as follows:
Cause of the error: CentOS 6.x does not support seccomp, while Elasticsearch 5.5.2 defaults bootstrap.system_call_filter to true and performs the check at startup; the check fails, and the failure prevents ES from starting.
Add the setting bootstrap.system_call_filter: false to elasticsearch.yml:
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
Restart the program and the problem is solved.
When reposting, please credit: SuperIT » Key points and pitfalls of installing an Elasticsearch cluster