Hadoop 2.6.5 Installation Guide
1 Platform
OS: Ubuntu 16.04.3 LTS
JDK: openjdk version "1.8.0_151"
Hadoop: hadoop-2.6.5
2 Pre-installation Preparation
2.1 Set the hostnames
On the master node, edit /etc/hostname:
master
On the slave node, edit /etc/hostname:
slave
2.2 Edit the hosts file
On the master node, edit /etc/hosts:
10.190.3.10 master
10.190.3.6 slave
On the slave node, edit /etc/hosts:
10.190.3.10 master
10.190.3.6 slave
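To confirm that name resolution works, a quick check on each node (assumes ping is available):
ping -c 1 master
ping -c 1 slave
Each hostname should resolve to the IP address listed above.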
2.3 Disable the firewall
On both the master and slave nodes:
sudo ufw disable
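Optionally confirm that the firewall is off:
sudo ufw status
# expected output: Status: inactive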
2.4 Create the hadoop user
On the master node:
sudo useradd -d /home/hadoop -m hadoop
sudo passwd hadoop
On the slave node:
sudo useradd -d /home/hadoop -m hadoop
sudo passwd hadoop
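On Ubuntu, useradd gives the new account /bin/sh as its login shell unless told otherwise. If a bash shell is preferred for the hadoop user (an optional tweak, not part of the original steps), run on both nodes:
sudo usermod -s /bin/bash hadoop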
2.5 Configure passwordless SSH login
The master and slave nodes should be able to log in to each other as the hadoop user without a password.
On the master node, run the following commands as the hadoop user:
ssh-keygen -t rsa
ssh-copy-id hadoop@slave
Verify:
ssh slave
You should now be able to log in to the slave machine without entering a password.
On the slave node, run the following commands as the hadoop user:
ssh-keygen -t rsa
ssh-copy-id hadoop@master
Verify:
ssh master
You should now be able to log in to the master machine without entering a password.
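Note that start-dfs.sh also opens SSH connections from master to itself to launch the NameNode and SecondaryNameNode, so master should additionally accept its own key. A minimal sketch of this extra step (not in the original text), run on master as the hadoop user:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh master    # should log in without a password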
2.6 Install Java
sudo apt install openjdk-8-jdk
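Verify the installation:
java -version
# should report the openjdk 1.8.0 version listed in section 1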
3 Install Hadoop
3.1 Unpack
tar -zvxf hadoop-2.6.5.tar.gz
mkdir -p ~/cloud    # create the target directory before moving
mv hadoop-2.6.5 ~/cloud/
ln -s /home/hadoop/cloud/hadoop-2.6.5 /home/hadoop/cloud/hadoop
3.2 Configure environment variables
Edit /etc/profile:
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/"
# set hadoop environment
export HADOOP_HOME=/home/hadoop/cloud/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export CLASSPATH=.:$JAVA_HOME/lib:$HADOOP_HOME/lib:$CLASSPATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
To make the environment variables take effect immediately, source the file. Note that they take effect only for the user who runs the command, so do this as the hadoop user:
# su hadoop
$ source /etc/profile
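A quick check that the variables are in effect for the hadoop user:
echo $HADOOP_HOME    # should print /home/hadoop/cloud/hadoop
hadoop version       # should report Hadoop 2.6.5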
4 Configure the Hadoop Files
All of the files below live in $HADOOP_CONF_DIR, i.e. /home/hadoop/cloud/hadoop/etc/hadoop.
4.1 core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/cloud/hadoop/hadoop_tmp</value>
<!-- the hadoop_tmp directory must be created manually -->
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
</configuration>
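As the comment above notes, the hadoop.tmp.dir directory is not created automatically. Create it as the hadoop user on both nodes:
mkdir -p /home/hadoop/cloud/hadoop/hadoop_tmp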
4.2 hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/cloud/hadoop/dfs/name</value>
<description>where the NameNode stores HDFS metadata</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/cloud/hadoop/dfs/data</value>
<description>where the DataNode physically stores HDFS data blocks</description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
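Like hadoop.tmp.dir, these directories should exist before the NameNode is formatted (see section 5.1). Create them as the hadoop user; dfs/name is used by the NameNode on master and dfs/data by the DataNode on slave:
mkdir -p /home/hadoop/cloud/hadoop/dfs/name
mkdir -p /home/hadoop/cloud/hadoop/dfs/data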
4.3 mapred-site.xml
Note that the 2.6.5 release ships only mapred-site.xml.template; create this file from it first (cp mapred-site.xml.template mapred-site.xml).
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
<!-- legacy MR1 setting; unused when mapreduce.framework.name is yarn -->
</property>
</configuration>
JobHistory is a history server bundled with Hadoop that records completed MapReduce jobs. It is not started by default; start it with:
$ sbin/mr-jobhistory-daemon.sh start historyserver
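After starting it, verify:
jps
# the output should now include a JobHistoryServer process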
4.4 yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>master:2181,slave1:2181,slave2:2181</value>
<!-- only used with ZooKeeper-based ResourceManager recovery/HA; note that slave1 and slave2 are not hosts in this two-node cluster -->
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
</configuration>
4.5 slaves
List the DataNode hostnames in this file, one per line:
slave
4.6 hadoop-env.sh
vim /home/hadoop/cloud/hadoop/etc/hadoop/hadoop-env.sh
Change export JAVA_HOME=${JAVA_HOME} to:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
and add (the path must match the install location from section 3.1):
export HADOOP_COMMON_LIB_NATIVE_DIR=/home/hadoop/cloud/hadoop/lib/native
4.7 Copy to the slave node
Finally, use scp to copy the whole /home/hadoop/cloud/hadoop-2.6.5 directory and its subdirectories to the same location on the slave node:
scp -r /home/hadoop/cloud/hadoop-2.6.5 hadoop@slave:/home/hadoop/cloud/
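scp copies only the Hadoop tree itself; the symbolic link from section 3.1 and the /etc/profile settings from section 3.2 must be repeated on the slave node. As the hadoop user on slave:
ln -s /home/hadoop/cloud/hadoop-2.6.5 /home/hadoop/cloud/hadoop
# then append the same export lines from section 3.2 to /etc/profile and source it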
5 Run Hadoop
5.1 Format the NameNode
Make sure every directory referenced in the configuration files has been created, then run on the master node as the hadoop user:
hdfs namenode -format
5.2 Start Hadoop
On the master node:
start-dfs.sh
start-yarn.sh
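The matching shutdown scripts, needed later in section 6.1, stop the daemons in the reverse order:
stop-yarn.sh
stop-dfs.sh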
5.3 Check the processes with jps
Processes on the master node:
8193 Jps
7943 ResourceManager
7624 NameNode
7802 SecondaryNameNode
Processes on the slave node:
1413 DataNode
1512 NodeManager
1626 Jps
5.4 View cluster status in a browser
HDFS overview: http://10.190.3.10:50070
YARN cluster: http://10.190.3.10:8088
JobHistory: http://10.190.3.10:19888
As noted in section 4.3, the JobHistory server is not started by default; start it with mr-jobhistory-daemon.sh before opening this page.
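To exercise the whole cluster, one option (not part of the original steps) is the bundled pi example; the jar path below matches the 2.6.5 release layout:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 2 10
# the job should show up on the YARN web UI (port 8088) and, once finished,
# on the JobHistory web UI (port 19888) if the history server is running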
6 Troubleshooting
6.1 No NameNode process in jps output
1. Run stop-all.sh first.
2. Reformat the NameNode. Before doing so, delete the directory that hadoop.tmp.dir in core-site.xml points to, and be sure to recreate it as an empty directory; then run hadoop namenode -format (see the sketch after this list).
3. Run start-all.sh.
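A minimal sketch of these recovery steps, assuming the hadoop.tmp.dir value from section 4.1 (warning: this wipes all HDFS data):
stop-all.sh
rm -rf /home/hadoop/cloud/hadoop/hadoop_tmp
mkdir -p /home/hadoop/cloud/hadoop/hadoop_tmp
hadoop namenode -format
start-all.sh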
Reference: https://segmentfault.com/a/1190000011514144#articleHeader15