Date: 2017.8.28
Project: user-profile/processor/profile
Location: ./user-profile/processor/profile/src/main/java/me/cxxxyx/log_process
Database: HBase - hadoop@hd1
Reference code: UserVisitLogExtraction.java
HBase
hbase shell
Common shell commands, for reference:
Create a new table, action_time, with the following fields (row-key construction is sketched below):
- time_uid_action (row key), uid (user ID), action (action type), time (time of occurrence), duration (duration), extra (extra info)
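The row key concatenates the date, the uid, and the action name, e.g. 20170823_testUID_login. A minimal sketch of building such a key (the helper is hypothetical; the notes don't show the actual code):

public static String buildRowKey(String date, String uid, String action) {
    // Hypothetical helper: builds the time_uid_action row key,
    // e.g. buildRowKey("20170823", "testUID", "login") -> "20170823_testUID_login"
    return date + "_" + uid + "_" + action;
}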
list # list the existing tables
describe 'problem' # show a table's schema; only the column families are listed
Create the table:
create 'action_time', {NAME => 'info', VERSIONS => 1}
Created successfully:
hbase(main):004:0> describe 'action_time'
Table action_time is ENABLED
action_time
COLUMN FAMILIES DESCRIPTION
{NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.0410 seconds
hbase(main):005:0>
Add a row of test data:
put 'action_time','20170823_testUID_login','info:uid','testUID'
put 'action_time','20170823_testUID_login','info:action','login'
put 'action_time','20170823_testUID_login','info:time','20170823'
put 'action_time','20170823_testUID_login','info:duration','20000'
Scan the data:
hbase(main):009:0> scan 'action_time'
ROW COLUMN+CELL
20170823_testUID_login column=info:action, timestamp=1503474231369, value=login
20170823_testUID_login column=info:duration, timestamp=1503474241524, value=20000
20170823_testUID_login column=info:time, timestamp=1503474236443, value=20170823
20170823_testUID_login column=info:uid, timestamp=1503474155795, value=testUID
1 row(s) in 0.0270 seconds
hbase(main):010:0>
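The same test row can also be written from Java with the HBase client API. A minimal sketch, assuming an HBase 1.x client on the classpath and an hbase-site.xml pointing at the cluster (the class name and setup are illustrative, not from the project):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutTestRow {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("action_time"))) {
            Put put = new Put(Bytes.toBytes("20170823_testUID_login"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("uid"), Bytes.toBytes("testUID"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("action"), Bytes.toBytes("login"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("time"), Bytes.toBytes("20170823"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("duration"), Bytes.toBytes("20000"));
            table.put(put);
        }
    }
}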
Back up the table info and schema in the project under synchronous-data/hbase-table-doc.
MySQL
Show the bridge server's IP address:
ifconfig
Using Navicat: connect over SSH first, then log in to the production MySQL. Example query, limited to 20 rows (a JDBC equivalent is sketched below):
select id, user_id from symptomchecker_doctor limit 20
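The same query can be run over JDBC once an SSH tunnel to the bridge server is in place. A minimal sketch; the host, port, database name, and credentials below are placeholders, not the real values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryDoctors {
    public static void main(String[] args) throws Exception {
        // Assumes an SSH tunnel forwarding localhost:3307 to the production MySQL.
        String url = "jdbc:mysql://localhost:3307/dbname";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "select id, user_id from symptomchecker_doctor limit 20");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + "\t" + rs.getLong("user_id"));
            }
        }
    }
}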
Logs
View the user log data:
hadoop fs -ls /logs/django/
hadoop fs -get /logs/django/elapsed_logger.log-20170822.lzma
The log files look like this:
-rw-r--r-- 3 root hadoop 2628559529 2017-08-23 00:23 /logs/django/elapsed_logger.log-20170822.gz
-rw-r--r-- 3 hadoop hadoop 931489800 2017-08-23 04:09 /logs/django/elapsed_logger.log-20170822.lzma
Decompress the file; this takes a while:
unlzma elapsed_logger.log-20170822.lzma
Hadoop web UI: http://md3:8888/hbase/
Query the data:
head -20000 elapsed_logger.log-20170822 | grep 'daily_request'
Record format: the key field is uid, the request path is daily_request, and the Get parameters carry the user's core info:
2017-08-22 00:01:30,018 INFO log_utils.log_elapsed_info Line:134 Time Elapsed: 0.015565s,
Path: /api/daily_request/, Code: 200, Get: [u'phoneType=iPhone7,1', u'push_id=068643ec746684e213c65d2fed18f2f961503d8afa61466c635685033d399f42',
u'vendor=ziyou', u'deviceModel=iPhone', u'app=1', u'client=DoctorClient',
u'platform=iPhone', u'version=4.9.8', u'build=4.9.8', u'systemVer=9.3.3', u'device_id=eefcae739890448a9d17e65d3f9ce47b', 'uid=68674437'],
Post: [], 121.204.121.33, CxxxyxClinic/4.9.8 (iPhone; iOS 9.3.3; Scale/3.00),
view_name: api.views.daily_request,
MR Job
Class: UserActionTimeExtraction
Imports the log records from the log files into the action_time table.
Test the regular expression first. Note that group(0) is the whole match, so there are groupCount() + 1 groups in total; group 1 is the time, group 2 the path, group 3 the uid. Reference:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Test {
    public static void main(String[] args) {
        final String REGEX =
                "(\\d{4}-\\d{1,2}-\\d{1,2} \\d{2}:\\d{2}:\\d{2}).*Path: (/.*/).*Get: \\[.*'uid=(.*?)'.*].*?";
        Pattern mPattern = Pattern.compile(REGEX);
        String line = "xxx"; // paste a sample log line here
        Matcher m = mPattern.matcher(line);
        if (m.find()) {
            // group(0) is the whole match; the capture groups start at 1
            for (int i = 0; i < m.groupCount(); i++) {
                System.out.println(m.group(i + 1));
            }
        }
    }
}
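Against the sample log line above, group 1 comes out as the timestamp (2017-08-22 00:01:30), group 2 as the path (/api/daily_request/), and group 3 as the uid (68674437).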
Build the MR job, adding the log input paths day by day in a loop (a sketch of the mapper follows the code):
public static Job configureJob(Configuration conf, String args[]) throws IOException, ParseException {
    // args: start date and end date, e.g. 20170711 20170712
    Job job = new Job(conf, JOB_NAME);
    Date startDate = DateUtils.getDate(args[0], TIME_FORMAT); // start date, e.g. 20170711
    Date endDate = DateUtils.getDate(args[1], TIME_FORMAT); // end date, e.g. 20170712
    while (startDate.getTime() <= endDate.getTime()) {
        // Path of that day's log file
        String path = String.format(LOG_PATH_FORMAT, DateUtils.getDateStr(startDate, TIME_FORMAT));
        System.out.println(path);
        FileInputFormat.addInputPath(job, new Path(path));
        startDate = addDays(startDate, 1); // advance one day at a time
    }
    job.setJarByClass(UserActionTimeExtraction.class); // class used to locate the jar
    job.setSpeculativeExecution(false);
    job.setMapperClass(innerMapper.class); // the Mapper class
    job.setNumReduceTasks(0); // map-only job
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapOutputKeyClass(NullWritable.class);
    job.setMapOutputValueClass(NullWritable.class);
    job.setOutputFormatClass(NullOutputFormat.class); // all writes go to HBase inside map()
    return job;
}
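The notes don't include innerMapper itself. A minimal sketch of what such a map-only writer might look like, assuming it parses each line with the regex above and writes straight to the action_time table; the real job's mapping from path to action name (e.g. "login") and its duration computation are not shown here:

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of innerMapper; since the job uses NullOutputFormat,
// all output is written to HBase inside map() rather than emitted to HDFS.
public class innerMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
    private static final Pattern PATTERN = Pattern.compile(
            "(\\d{4}-\\d{1,2}-\\d{1,2} \\d{2}:\\d{2}:\\d{2}).*Path: (/.*/).*Get: \\[.*'uid=(.*?)'.*].*?");
    private Connection connection;
    private Table table;

    @Override
    protected void setup(Context context) throws IOException {
        connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
        table = connection.getTable(TableName.valueOf("action_time"));
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException {
        Matcher m = PATTERN.matcher(value.toString());
        if (!m.find()) {
            return; // skip lines that don't match the elapsed-log format
        }
        String time = m.group(1), path = m.group(2), uid = m.group(3);
        // Placeholder: the real job would derive the action name from the path.
        String action = "login";
        String rowKey = time.substring(0, 10) + "_" + uid + "_" + action; // e.g. 2017-08-22_62908362_login
        Put put = new Put(Bytes.toBytes(rowKey));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("uid"), Bytes.toBytes(uid));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("action"), Bytes.toBytes(action));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("time"), Bytes.toBytes(time));
        table.put(put);
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        table.close();
        connection.close();
    }
}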
Output:
2017-08-23 00:00:00
/robot/p/upload_sleep_raw_data/
123121
Commands to run:
mvn clean; mvn package
scp ./target/profile-1.1.1-jar-with-dependencies.jar wangchenlong@bridge.cxxxyx.me:/home/wangchenlong/profile-1.1.1-jar-with-dependencies.jar
scp ./profile-1.1.1-jar-with-dependencies.jar hadoop@hd1:/home/hadoop/wangchenlong/profile-1.1.1-jar-with-dependencies.jar
hadoop jar ./profile-1.1.1-jar-with-dependencies.jar me.cxxxyx.log_process.UserActionTimeExtraction 20170822 20170822 1>0 2>log.txt
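The entry point that wires configureJob to the command-line arguments isn't shown in these notes; presumably something like the following drives it (a hypothetical sketch):

public static void main(String[] args) throws Exception {
    // args: start date and end date, e.g. 20170822 20170822
    Configuration conf = HBaseConfiguration.create();
    Job job = configureJob(conf, args);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}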
Check the MR job's progress at its web URL (see the tracking URL below).
Final job output:
2017-08-22_62908362_login,62908362,login,2017-08-22 08:28:27,55435
2017-08-22_119001707_login,119001707,login,2017-08-22 22:21:44,18
2017-08-22_95913164_login,95913164,login,2017-08-22 04:58:00,4567
2017-08-22_96516401_login,96516401,login,2017-08-22 06:41:27,42999
2017-08-22_119001704_login,119001704,login,2017-08-22 22:21:43,0
2017-08-22_119001710_login,119001710,login,2017-08-22 22:21:48,9
2017-08-22_984347_login,984347,login,2017-08-22 22:36:02,5018
2017-08-22_95897600_login,95897600,login,2017-08-22 00:00:06,79387
2017-08-22_74287233_login,74287233,login,2017-08-22 06:43:49,44532
2017-08-22_30766974_login,30766974,login,2017-08-22 08:13:40,56185
Check the time range covered by the logs:
hadoop fs -ls /logs/django/
/logs/django/elapsed_logger.log-20170101.gz
/logs/django/elapsed_logger.log-20170824.gz
Got an error; the cause was that all of Hadoop's resources were occupied:
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
The MR job prints its tracking URL and execution progress:
INFO mapreduce.Job: The url to track the job: http://md3:8088/proxy/application_1489390879204_16905/
INFO mapreduce.Job: Running job: job_1489390879204_16905
INFO mapreduce.Job: Job job_1489390879204_16905 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%