15.1 Test Environment Overview
- Content overview
- Environment setup
- Connection examples for non-Kerberos and Kerberos environments
- Test environment
- Kerberos cluster: CDH 5.11.2, OS Red Hat 7.2
- Non-Kerberos cluster: CDH 5.13, OS CentOS 6.5
- Windows + IntelliJ
- Prerequisites
- Both CDH clusters are running normally
- The local development environment can reach the cluster network, with the required ports open
15.2 Environment Setup
- Maven dependencies
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0-cdh5.11.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0-cdh5.11.2</version>
</dependency>
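Note that CDH artifacts are published to Cloudera's Maven repository rather than Maven Central, so the pom typically also needs a repository entry along these lines:
<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>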
- Create a keytab file for accessing the cluster (skip this step for a non-Kerberos cluster)
[ec2-user@ip-172-31-22-86 keytab]$ sudo kadmin.local
Authenticating as principal mapred/admin@CLOUDERA.COM with password.
kadmin.local: listprincs fayson*
fayson@CLOUDERA.COM
kadmin.local: xst -norandkey -k fayson.keytab fayson@CLOUDERA.COM
...
kadmin.local: exit
[ec2-user@ip-172-31-22-86 keytab]$ ll
total 4
-rw------- 1 root root 514 Nov 28 10:54 fayson.keytab
[ec2-user@ip-172-31-22-86 keytab]$
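Before wiring the keytab into code, it can be sanity-checked on any host with a Kerberos client, for example:
[ec2-user@ip-172-31-22-86 keytab]$ kinit -kt fayson.keytab fayson@CLOUDERA.COM
[ec2-user@ip-172-31-22-86 keytab]$ klist
klist should then show a valid TGT for fayson@CLOUDERA.COM.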
- Obtain the cluster's krb5.conf file, whose contents are shown below (skip this step for a non-Kerberos cluster)
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
default_realm = CLOUDERA.COM
#default_ccache_name = KEYRING:persistent:%{uid}
[realms]
CLOUDERA.COM = {
  kdc = ip-172-31-22-86.ap-southeast-1.compute.internal
  admin_server = ip-172-31-22-86.ap-southeast-1.compute.internal
}
- Configure the hosts file
172.31.22.86 ip-172-31-22-86.ap-southeast-1.compute.internal
172.31.26.102 ip-172-31-26-102.ap-southeast-1.compute.internal
172.31.21.45 ip-172-31-21-45.ap-southeast-1.compute.internal
172.31.26.80 ip-172-31-26-80.ap-southeast-1.compute.internal
- Download the YARN client configuration from Cloudera Manager
- Project directory structure (a rough sketch follows)
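The directory structure was shown as a screenshot in the original; judging from the confPath values used in the code below ("conf" and "nonekb-conf" under the working directory), it looks roughly like this (names outside those two directories are illustrative):
hbase-develop/
    conf/            # YARN client configuration for the Kerberos cluster (from CM)
    nonekb-conf/     # YARN client configuration for the non-Kerberos cluster
    src/main/java/   # the classes listed below
    pom.xml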
15.3 Common Classes for Kerberos and Non-Kerberos Environments
- WordCountMapper class
import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Read one line of input
        String line = value.toString();
        // Split the line into an array of words
        String[] words = StringUtils.split(line, " ");
        // Emit a <word, 1> pair for each word
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
- WordCountReducer class
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            // value.get() extracts the primitive long
            count += value.get();
        }
        // Emit the <word, count> pair
        context.write(key, new LongWritable(count));
    }
}
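Since this reduce is a plain sum, the same class could also be registered as a combiner to shrink shuffle traffic. The original job does not do this; if desired, it is a one-line addition in the job setup:
// Hypothetical addition in InitMapReduceJob, not part of the original setup:
wcjob.setCombinerClass(WordCountReducer.class);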
- InitMapReduceJob class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InitMapReduceJob {
    public static Job initWordCountJob(Configuration conf) {
        Job wcjob = null;
        try {
            // Enable cross-platform submission (needed when submitting from Windows)
            conf.setBoolean("mapreduce.app-submission.cross-platform", true);
            // Point the job at the packaged jar ("mapred.jar" is the legacy name of mapreduce.job.jar)
            conf.set("mapred.jar", "C:\\Users\\Administrator\\IdeaProjects\\hbasedevelop\\target\\hbase-develop-1.0-SNAPSHOT.jar");
            // The Job is built from the conf, which now carries the jar and submission settings
            wcjob = Job.getInstance(conf);
            wcjob.setMapperClass(WordCountMapper.class);
            wcjob.setReducerClass(WordCountReducer.class);
            // Key/value types emitted by the mapper
            wcjob.setMapOutputKeyClass(Text.class);
            wcjob.setMapOutputValueClass(LongWritable.class);
            // Key/value types emitted by the reducer
            wcjob.setOutputKeyClass(Text.class);
            wcjob.setOutputValueClass(LongWritable.class);
            FileInputFormat.setInputPaths(wcjob, "/fayson");
            FileOutputFormat.setOutputPath(wcjob, new Path("/wc/output"));
        } catch (Exception e) {
            e.printStackTrace();
        }
        return wcjob;
    }
}
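One practical wrinkle: FileOutputFormat refuses to run if /wc/output already exists, so a rerun fails with FileAlreadyExistsException. A minimal sketch of clearing it first (not in the original class; assumes an extra import of org.apache.hadoop.fs.FileSystem):
// Hypothetical pre-step before FileOutputFormat.setOutputPath(...):
FileSystem fs = FileSystem.get(conf);
Path output = new Path("/wc/output");
if (fs.exists(output)) {
    fs.delete(output, true); // recursively remove stale output from a previous run
}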
- ConfigurationUtil class
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ConfigurationUtil {
    /**
     * Build a Hadoop Configuration from the client configuration files.
     * @param confPath directory containing the client configuration files
     * @return the assembled Configuration
     */
    public static Configuration getConfiguration(String confPath) {
        Configuration configuration = new YarnConfiguration();
        configuration.addResource(new Path(confPath + File.separator + "core-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "hdfs-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "mapred-site.xml"));
        configuration.addResource(new Path(confPath + File.separator + "yarn-site.xml"));
        configuration.setBoolean("dfs.support.append", true);
        configuration.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        configuration.setBoolean("fs.hdfs.impl.disable.cache", true);
        return configuration;
    }
}
15.4 Non-Kerberos Environment
- Sample code, run from IntelliJ
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NodeKBMRTest {
    private static String confPath = System.getProperty("user.dir") + File.separator + "nonekb-conf";

    public static void main(String[] args) {
        try {
            Configuration conf = ConfigurationUtil.getConfiguration(confPath);
            Job wcjob = InitMapReduceJob.initWordCountJob(conf);
            wcjob.setJarByClass(NodeKBMRTest.class);
            wcjob.setJobName("NodeKBMRTest");
            // Submit the job and block until it finishes
            boolean res = wcjob.waitForCompletion(true);
            System.exit(res ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
- Run the class directly in IntelliJ to submit the MR job to the Hadoop cluster; once it completes successfully, check the output in HDFS.
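For example, from any cluster node (the part file name assumes the default single-reducer output):
[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -ls /wc/output
[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -cat /wc/output/part-r-00000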
15.5 Kerberos Environment
- Sample code, run from IntelliJ
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.security.UserGroupInformation;

public class KBMRTest {
    private static String confPath = System.getProperty("user.dir") + File.separator + "conf";

    public static void main(String[] args) {
        try {
            System.setProperty("java.security.krb5.conf", "/Volumes/Transcend/keytab/krb5.conf");
            System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
            System.setProperty("sun.security.krb5.debug", "true"); // enable Kerberos debug logging
            Configuration conf = ConfigurationUtil.getConfiguration(confPath);
            // Log in with the Kerberos principal and keytab
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab("fayson@CLOUDERA.COM", "/Volumes/Transcend/keytab/fayson.keytab");
            UserGroupInformation userGroupInformation = UserGroupInformation.getCurrentUser();
            Job wcjob = InitMapReduceJob.initWordCountJob(conf);
            wcjob.setJarByClass(KBMRTest.class);
            wcjob.setJobName("KBMRTest");
            // Submit the job and block until it finishes
            boolean res = wcjob.waitForCompletion(true);
            System.exit(res ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
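The userGroupInformation handle fetched above is not otherwise used; one way to confirm that the keytab login took effect is to print it before submitting (a hypothetical addition, not in the original code):
// Hypothetical check after loginUserFromKeytab(...):
System.out.println("Logged in as: " + userGroupInformation.getUserName()
        + ", from keytab: " + userGroupInformation.isFromKeytab());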
- Run the class directly in IntelliJ; the code pushes the jar to the cluster automatically. After the job shows as successful in the YARN web UI, check the directory and files created in HDFS.
- Note: whenever the code changes, recompile and repackage before submitting, and copy the new jar to the path configured via mapred.jar in InitMapReduceJob.