Deployment plan
host | NN-1 | NN-2 | DN | ZK | ZKFC | JNN | RS | NM |
---|---|---|---|---|---|---|---|---|
node01 | * | | | | * | * | | |
node02 | | * | * | * | * | * | | * |
node03 | | | * | * | | * | * | * |
node04 | | | * | * | | | * | * |
Environment preparation
- Install a single virtual machine first
- Configure a static IP
See the post on configuring a static IP on CentOS 7
- Change the hostname
See the post on changing the hostname on Linux
- Install the JDK
See the post on installing Java on CentOS 7
- Disable the firewall
See the post on disabling the firewall on CentOS 7
- Download the Hadoop tarball
We use hadoop-3.2.1.tar.gz here; put it in the /tmp directory
- Extract it to /usr/local/hadoop/
mkdir /usr/local/hadoop
tar -zxvf /tmp/hadoop-3.2.1.tar.gz -C /usr/local/hadoop
- Clone the virtual machine
Take a snapshot of the current virtual machine first.
Then right-click the virtual machine and choose Manage --> Clone.
Select the existing snapshot and click Next.
Select "Create a full clone" and click Next.
Enter node02 as the virtual machine name and click Finish.
Power on the new virtual machine and change its IP and hostname.
Repeat the steps above to create node03 and node04.
- Configure the hosts file on node01, adding node02, node03 and node04
vi /etc/hosts
Add the following entries
192.168.88.201 node01
192.168.88.202 node02
192.168.88.203 node03
192.168.88.204 node04
Then copy the hosts file to the other three machines
scp /etc/hosts node02:/etc/hosts
scp /etc/hosts node03:/etc/hosts
scp /etc/hosts node04:/etc/hosts
Enter the username and password when prompted to complete the copies. A quick way to verify name resolution is shown below.
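If you want to confirm that every node can now be reached by name, an optional check like the following can be run on each machine (a minimal sketch using the hostnames configured above):
# every host should answer a single ping
for h in node01 node02 node03 node04; do ping -c 1 $h; done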
- Configure passwordless SSH login
Run the following on all four virtual machines
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
Copy node01's id_rsa.pub to the other three machines
cd /root/.ssh
scp id_rsa.pub node02:`pwd`/node01.pub
scp id_rsa.pub node03:`pwd`/node01.pub
scp id_rsa.pub node04:`pwd`/node01.pub
On the other three machines, run
cd /root/.ssh
cat node01.pub >> authorized_keys
On node01, check that you can now log in to the other machines without a password
ssh node02
exit
ssh node03
exit
ssh node04
exit
If none of them prompt for a password, it works; a non-interactive check is sketched below.
Configure passwordless login from node02 to the other three machines in the same way.
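As an optional alternative to the manual ssh/exit sequence above, a loop like this (an illustrative sketch, run on node01) prints each remote hostname and fails instead of prompting if passwordless login is not set up:
for h in node02 node03 node04; do ssh -o BatchMode=yes $h hostname; done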
Deployment
- On node01, configure hdfs-site.xml
cd /usr/local/hadoop/hadoop-3.2.1/etc/hadoop
vi hdfs-site.xml
Add the following configuration
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node01:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node02:9870</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/var/path/to/journal/node/local/data</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
- Configure core-site.xml
vi core-site.xml
Add the following configuration
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop/ha</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node02:2181,node03:2181,node04:2181</value>
</property>
</configuration>
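If the libxml2 tools are available (they can be installed with yum install libxml2 if needed), an optional well-formedness check of the two files just edited can catch copy/paste mistakes early:
xmllint --noout hdfs-site.xml core-site.xml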
- Set JAVA_HOME in hadoop-env.sh
vi hadoop-env.sh
Find the JAVA_HOME line and set it to
export JAVA_HOME=/usr/local/java/jdk1.8.0_251
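The path must match where the JDK was actually installed during environment preparation; a quick check such as the following confirms it (assuming the install path above):
/usr/local/java/jdk1.8.0_251/bin/java -version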
- Edit workers
vi workers
Add the following hosts
node02
node03
node04
- Distribute hdfs-site.xml, core-site.xml, hadoop-env.sh and workers to node02, node03 and node04
scp core-site.xml hdfs-site.xml hadoop-env.sh workers node02:`pwd`
scp core-site.xml hdfs-site.xml hadoop-env.sh workers node03:`pwd`
scp core-site.xml hdfs-site.xml hadoop-env.sh workers node04:`pwd`
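Optionally, checksums can be compared afterwards to confirm the copies arrived intact (an illustrative check, run from the same etc/hadoop directory; node02 can be swapped for any of the other nodes):
md5sum core-site.xml hdfs-site.xml hadoop-env.sh workers
ssh node02 "cd /usr/local/hadoop/hadoop-3.2.1/etc/hadoop && md5sum core-site.xml hdfs-site.xml hadoop-env.sh workers"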
- On node01
- Modify a few scripts in the sbin directory so that Hadoop can be started as the root user
cd /usr/local/hadoop/hadoop-3.2.1/sbin
Edit start-dfs.sh and stop-dfs.sh and add the following variables:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
Edit start-yarn.sh and stop-yarn.sh and add the following variables:
YARN_RESOURCEMANAGER_USER=root
HDFS_DATANODE_SECURE_USER=yarn
YARN_NODEMANAGER_USER=root
Distribute these files to the other nodes
scp start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh node02:`pwd`
scp start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh node03:`pwd`
scp start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh node04:`pwd`
- Set up ZooKeeper
- Download apache-zookeeper-3.6.1-bin.tar.gz and put it in the /tmp directory on node02
- Extract it to /usr/local/zookeeper
mkdir /usr/local/zookeeper
tar -zxvf /tmp/apache-zookeeper-3.6.1-bin.tar.gz -C /usr/local/zookeeper
- Edit the configuration file
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Make the following changes
# find dataDir and change it to the value below
dataDir=/var/zookeeper
# add the following entries
server.1=node02:2888:3888
server.2=node03:2888:3888
server.3=node04:2888:3888
- Distribute ZooKeeper to node03 and node04
cd /usr/local
scp -r zookeeper/ node03:`pwd`
scp -r zookeeper/ node04:`pwd`
- Create the data directory on node02, node03 and node04
mkdir /var/zookeeper
- Create a myid file in /var/zookeeper on node02, node03 and node04
On node02: echo 1 > /var/zookeeper/myid
On node03: echo 2 > /var/zookeeper/myid
On node04: echo 3 > /var/zookeeper/myid
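Since passwordless SSH from node02 was configured earlier, the files on node03 and node04 can also be created remotely from node02 if you prefer (an optional sketch; each node must end up with its own id):
ssh node03 "mkdir -p /var/zookeeper && echo 2 > /var/zookeeper/myid"
ssh node04 "mkdir -p /var/zookeeper && echo 3 > /var/zookeeper/myid"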
- Start the ZooKeeper cluster
On node02, node03 and node04, run
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh start
You can check the status with
bin/zkServer.sh status
- Start the JournalNodes
On node01, node02 and node03, run
cd /usr/local/hadoop/hadoop-3.2.1/
bin/hdfs --daemon start journalnode
Then run jps to check that the JournalNode process is up.
- Format the file system
On node01, run
cd /usr/local/hadoop/hadoop-3.2.1/
bin/hdfs namenode -format
- Start the NameNode on node01
cd /usr/local/hadoop/hadoop-3.2.1/
bin/hdfs --daemon start namenode
- Run bootstrapStandby on node02
cd /usr/local/hadoop/hadoop-3.2.1/
bin/hdfs namenode -bootstrapStandby
- Initialize the HA state in ZooKeeper
On node01, run
cd /usr/local/hadoop/hadoop-3.2.1/
bin/hdfs zkfc -formatZK
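After formatting, the HA znode can be verified from any ZooKeeper node (a quick optional sanity check; the /hadoop-ha/mycluster path corresponds to the nameservice defined in hdfs-site.xml):
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkCli.sh -server node02:2181
# inside the zkCli shell:
ls /hadoop-ha
# expected output: [mycluster]
quit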
- Start the Hadoop cluster
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/start-dfs.sh
- Run jps to check that the processes are correct
[root@node01 hadoop-3.2.1]# jps
15816 NameNode
16521 DFSZKFailoverController
16572 Jps
15615 JournalNode
[root@node02 hadoop-3.2.1]# jps
13063 Jps
13033 DFSZKFailoverController
1818 QuorumPeerMain
12763 NameNode
12875 DataNode
12606 JournalNode
[root@node03 hadoop-3.2.1]# jps
10513 Jps
10306 JournalNode
10402 DataNode
1734 QuorumPeerMain
[root@node04 var]# jps
2272 Jps
2209 DataNode
1764 QuorumPeerMain
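The NameNode HA state and DataNode registration can also be checked from the command line (a minimal sketch; nn1 and nn2 are the NameNode IDs defined in hdfs-site.xml):
cd /usr/local/hadoop/hadoop-3.2.1/
# one of these should report active and the other standby
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2
# the report should list three live DataNodes (node02, node03, node04)
bin/hdfs dfsadmin -report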
- Open in a browser
http://node01:9870
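Automatic failover can be exercised at this point by stopping the active NameNode and confirming that the standby takes over (an optional test; it assumes nn1 on node01 is currently the active one):
cd /usr/local/hadoop/hadoop-3.2.1/
# on node01: stop the active NameNode
bin/hdfs --daemon stop namenode
# after a few seconds the other NameNode should report active
bin/hdfs haadmin -getServiceState nn2
# start the stopped NameNode again; it rejoins as standby
bin/hdfs --daemon start namenode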
- Stop the Hadoop cluster and the ZooKeeper cluster
On node01
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/stop-dfs.sh
On node02, node03 and node04
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh stop
- Start the ZooKeeper cluster and the Hadoop cluster
On node02, node03 and node04
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh start
On node01
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/start-dfs.sh
Configuring highly available resource management (YARN)
On node01
- Configure mapred-site.xml
cd /usr/local/hadoop/hadoop-3.2.1/etc/hadoop/
vi mapred-site.xml
Configure it as follows
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
- Configure yarn-site.xml
vi yarn-site.xml
Configure it as follows
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster1</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node03</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node04</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>node03:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>node04:8088</value>
</property>
<property>
<name>hadoop.zk.address</name>
<value>node02:2181,node03:2181,node04:2181</value>
</property>
</configuration>
- Distribute the configuration
scp mapred-site.xml yarn-site.xml node02:`pwd`
scp mapred-site.xml yarn-site.xml node03:`pwd`
scp mapred-site.xml yarn-site.xml node04:`pwd`
- Configure passwordless login between node03 and node04
On node03
cd /root/.ssh
scp id_rsa.pub node04:`pwd`/node03.pub
On node04
cd /root/.ssh
cat node03.pub >> authorized_keys
scp id_rsa.pub node03:`pwd`/node04.pub
On node03
cat node04.pub >> authorized_keys
ssh node04
ssh node03
exit
exit
If they can log in to each other without a password, it works.
- Start everything in order
- On node02, node03 and node04, start ZooKeeper first
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh start
- On node01, start HDFS
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/start-dfs.sh
- On node01, start YARN (resource management)
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/start-yarn.sh
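The ResourceManager HA state and a small test job can be checked from the command line as well (an optional sketch; rm1 and rm2 are the IDs from yarn-site.xml, and the examples jar ships with the Hadoop distribution):
# one ResourceManager should be active, the other standby
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2
# node02, node03 and node04 should be listed as RUNNING NodeManagers
bin/yarn node -list
# optional smoke test: run the bundled pi example
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar pi 2 10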
- Open http://node03:8088/ in a browser
If the "Nodes of the cluster" page appears, it works.
- Stop the Hadoop cluster and the ZooKeeper cluster
On node01
cd /usr/local/hadoop/hadoop-3.2.1/
sbin/stop-all.sh
On node02, node03 and node04
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh stop