1. ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode namespaceID = 1462711424
http://blog.csdn.net/wh62592855/article/details/5752199
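This error typically appears after the NameNode has been reformatted, so the DataNode's stored namespaceID no longer matches. A minimal sketch of one common fix, editing the datanode's VERSION file, using a mock directory layout (the real file is the current/VERSION under the configured dfs.data.dir, e.g. /var/lib/hadoop-0.20/cache/hdfs/dfs/data/current/VERSION):

```shell
# Mock of the datanode's VERSION file; the path layout mirrors a real dfs.data.dir.
mkdir -p dfs/data/current
echo 'namespaceID=1462711424' > dfs/data/current/VERSION

# Overwrite the datanode's namespaceID with the namenode's value from the log,
# then restart the datanode. (Alternative: delete the data directory and let
# the datanode re-register, losing its local blocks.)
sed -i 's/^namespaceID=.*/namespaceID=240012870/' dfs/data/current/VERSION
cat dfs/data/current/VERSION
```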
2. org.apache.hadoop.security.AccessControlException: Permission denied: user=xxj
Add the following property to hdfs-site.xml:
  dfs.permissions
  false
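The flattened name/value pair above corresponds to this property block, placed inside the <configuration> element of hdfs-site.xml. Note this disables HDFS permission checking cluster-wide, so it is only advisable for development setups:

```xml
<!-- hdfs-site.xml: disable permission checking (development only) -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
```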
3. Invalid Hadoop Runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field
In Eclipse, open Window -> Preferences -> Map/Reduce and select the Hadoop installation root directory.
4. Eclipse error: failure to login
Add the following jars to the Eclipse Hadoop plugin's lib/ directory:
lib/hadoop-core.jar,
lib/commons-cli-1.2.jar,
lib/commons-configuration-1.6.jar,
lib/commons-httpclient-3.0.1.jar,
lib/commons-lang-2.4.jar,
lib/jackson-core-asl-1.0.1.jar,
lib/jackson-mapper-asl-1.0.1.jar
Then edit META-INF/MANIFEST.MF so Bundle-ClassPath lists them:
Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/commons-cli-1.2.jar,lib/commons-configuration-1.6.jar,lib/commons-httpclient-3.0.1.jar,lib/commons-lang-2.4.jar,lib/jackson-core-asl-1.0.1.jar,lib/jackson-mapper-asl-1.0.1.jar
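A sketch of the repacking step, assuming the plugin jar has already been unpacked into ./plugin and the listed jars copied from the Hadoop distribution into plugin/lib/ (all paths illustrative). MANIFEST.MF wraps long header values across lines, and each continuation line must begin with a single space:

```shell
mkdir -p plugin/META-INF plugin/lib
# (Here you would copy hadoop-core.jar, commons-cli-1.2.jar, etc. into plugin/lib/)

# Write the Bundle-ClassPath header; continuation lines start with one space.
cat > plugin/META-INF/MANIFEST.MF <<'EOF'
Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/commons-cli-1.2.jar,
 lib/commons-configuration-1.6.jar,lib/commons-httpclient-3.0.1.jar,
 lib/commons-lang-2.4.jar,lib/jackson-core-asl-1.0.1.jar,
 lib/jackson-mapper-asl-1.0.1.jar
EOF

grep -c 'jackson-mapper-asl' plugin/META-INF/MANIFEST.MF
# Repack, e.g.: jar cfm hadoop-eclipse-plugin.jar plugin/META-INF/MANIFEST.MF -C plugin .
```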
5. Hadoop 1.0.0: the TaskTracker fails to start when Hadoop is launched:
ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-admin\mapred\local\ttprivate to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:682)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:655)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:719)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1436)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3694)
Running a job from Eclipse fails the same way: Failed to set permissions of path: \tmp\hadoop-admin\mapred\staging\Administrator-1506477061\.staging to 0700
Cause: on Windows, the Hadoop TaskTracker fails to start this way in versions 0.20.204, 0.20.205, and 1.0.0.
Workarounds suggested online vary widely; some recommend downgrading to a release before 0.20.204.
I fixed it by modifying the code of the checkReturnValue method in the FileUtil class, recompiling, and replacing the original hadoop-core-1.0.0.jar.
Download for the patched hadoop-core-1.0.0.jar: http://download.csdn.net/detail/java2000_wl/4326323
Bug report: https://issues.apache.org/jira/browse/HADOOP-7682
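The patch itself is small. A sketch using a mock excerpt of the failing check in org.apache.hadoop.fs.FileUtil.checkReturnValue (the real file lives in the Hadoop 1.0.0 source tree, and its exact code may differ slightly): the commonly circulated workaround turns the thrown IOException into a logged warning, after which hadoop-core is rebuilt and swapped in.

```shell
# Mock excerpt of FileUtil.checkReturnValue from Hadoop 1.0.0 (illustrative).
cat > FileUtil-excerpt.java <<'EOF'
if (!rv) {
  throw new IOException("Failed to set permissions of path: " + p +
                        " to " + String.format("%04o", permission.toShort()));
}
EOF

# The workaround: log a warning instead of throwing, so the TaskTracker
# can start on Windows despite the failed permission change.
sed -i 's/throw new IOException(/LOG.warn(/' FileUtil-excerpt.java
grep -c 'LOG.warn' FileUtil-excerpt.java
# Afterwards: rebuild (e.g. "ant jar") and replace hadoop-core-1.0.0.jar.
```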
6. Bad connection to FS. command aborted. exception: Call to dp01-154954/192.168.13.134:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\tmp\hadoop-SYSTEM\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
Fix: reformat the NameNode with bin/hadoop namenode -format (take care not to mistype the command).
7.org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-SYSTEM/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.9412 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1992)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1972)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Fix: bin/hadoop dfsadmin -safemode leave (leaves safe mode)
The -safemode argument accepts:
enter - enter safe mode
leave - force the NameNode to leave safe mode
get - report whether safe mode is on
wait - block until safe mode ends
8. INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...
Fix: bin/hadoop dfsadmin -safemode leave (leaves safe mode)
9. On Windows 7, ssh fails to start with: ssh: connect to host localhost port 22: Connection refused
Fix: enter the Windows login user name.
10. Unable to load native-hadoop library for your platform... using builtin-java classes where applicable...
Cause:
Formatting Hadoop multiple times left the version information inconsistent; making it consistent again resolves the problem.
Fix:
1. Stop all services: stop-all.sh
2. Reformat the namenode: hadoop namenode -format
3. Restart all services: start-all.sh
4. Normal operation resumes.
11. After running bin/hadoop fs -put ~/input /in, the following error appears:
There are 0 datanode(s) running and no node(s) are excluded in this operation.
This problem plagued me for a long time despite much searching. The cause in my case: after formatting DFS the first time, I started and used Hadoop, then later ran the format command again (hdfs namenode -format). That regenerates the namenode's clusterID while the datanode's clusterID stays unchanged.
Fix: open the datanode and namenode directories configured in hdfs-site.xml, open the VERSION file inside each current folder, and note that the two clusterID values differ. Change the clusterID in the datanode's VERSION file to match the namenode's, then restart DFS (run start-dfs.sh). Running jps afterwards shows the DataNode started normally.
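The manual sync described above can be sketched with mock VERSION files (the real files live under the current/ subdirectories of the name and data directories configured in hdfs-site.xml; the clusterID values here are made up):

```shell
# Mock namenode and datanode storage directories with mismatched clusterIDs.
mkdir -p name/current data/current
echo 'clusterID=CID-namenode-1111' > name/current/VERSION
echo 'clusterID=CID-datanode-2222' > data/current/VERSION

# Copy the namenode's clusterID into the datanode's VERSION file.
nn_cid=$(grep '^clusterID=' name/current/VERSION)
sed -i "s/^clusterID=.*/$nn_cid/" data/current/VERSION

diff name/current/VERSION data/current/VERSION && echo 'clusterIDs match'
# Then restart DFS (start-dfs.sh) and confirm with jps that DataNode is up.
```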