Namenode HA (UnknownHostException: nameservice1)

Votes: 7 | Answers: 4

We enabled Namenode High Availability using Cloudera Manager:

Cloudera Manager >> HDFS >> Actions >> Enable High Availability >> selected the Standby Namenode and Journal Nodes, then set the nameservice name to nameservice1.

After the whole process completed, we deployed the client configuration.

We tested from a client machine by listing the HDFS root directory (hadoop fs -ls /), then manually failed over to the standby namenode and listed the directory again (hadoop fs -ls /). This test worked perfectly.
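
(For reference, a minimal way to confirm which namenode is active before and after the failover; the namenode IDs namenode1/namenode2 below are assumptions taken from the configuration shown later and may differ in a CM-managed setup.)

# Ask each namenode for its HA state (check dfs.ha.namenodes.nameservice1 for the real IDs)
hdfs haadmin -getServiceState namenode1
hdfs haadmin -getServiceState namenode2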

But when I run a Hadoop sleep job with the following command, it fails:

$ hadoop jar /opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/lib/hadoop-0.20-mapreduce/hadoop-examples.jar sleep -m 1 -r 0
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:980)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:974)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:974)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:948)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1410)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:174)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:237)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.SleepJob.main(SleepJob.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.net.UnknownHostException: nameservice1
... 37 more

I don't understand why it cannot resolve nameservice1 even after deploying the client configuration.

When I googled this problem, I found only one suggested solution:

Add the following entries to the configuration to fix the issue:

dfs.nameservices=nameservice1
dfs.ha.namenodes.nameservice1=namenode1,namenode2
dfs.namenode.rpc-address.nameservice1.namenode1=ip-10-118-137-215.ec2.internal:8020
dfs.namenode.rpc-address.nameservice1.namenode2=ip-10-12-122-210.ec2.internal:8020
dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

My impression was that Cloudera Manager takes care of this. I checked the configuration on the client, and these entries are present in the deployed configuration (/var/run/cloudera-scm-agent/process/1998-deploy-client-config/hadoop-conf/hdfs-site.xml).
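
(One way to check what the client actually resolves; a sketch assuming the hdfs command on the client reads /etc/hadoop/conf. Very old releases may not support the -confKey option, in which case the grep check still applies.)

# Ask the client-side configuration which nameservice and namenodes it knows about
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.nameservice1
# Check whether the file under /etc/hadoop/conf mentions nameservice1 at all
grep -c nameservice1 /etc/hadoop/conf/hdfs-site.xml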

Some more details about the configuration files:

[11:22:37 [email protected]:~]# ls -l /etc/hadoop/conf.cloudera.*
/etc/hadoop/conf.cloudera.hdfs:
total 16
-rw-r--r-- 1 root root  943 Jul 31 09:33 core-site.xml
-rw-r--r-- 1 root root 2546 Jul 31 09:33 hadoop-env.sh
-rw-r--r-- 1 root root 1577 Jul 31 09:33 hdfs-site.xml
-rw-r--r-- 1 root root  314 Jul 31 09:33 log4j.properties

/etc/hadoop/conf.cloudera.hdfs1:
total 20
-rwxr-xr-x 1 root root  233 Sep  5  2013 container-executor.cfg
-rw-r--r-- 1 root root 1890 May 21 15:48 core-site.xml
-rw-r--r-- 1 root root 2546 May 21 15:48 hadoop-env.sh
-rw-r--r-- 1 root root 1577 May 21 15:48 hdfs-site.xml
-rw-r--r-- 1 root root  314 May 21 15:48 log4j.properties

/etc/hadoop/conf.cloudera.mapreduce:
total 20
-rw-r--r-- 1 root root 1032 Jul 31 09:33 core-site.xml
-rw-r--r-- 1 root root 2775 Jul 31 09:33 hadoop-env.sh
-rw-r--r-- 1 root root 1450 Jul 31 09:33 hdfs-site.xml
-rw-r--r-- 1 root root  314 Jul 31 09:33 log4j.properties
-rw-r--r-- 1 root root 2446 Jul 31 09:33 mapred-site.xml

 /etc/hadoop/conf.cloudera.mapreduce1:
total 24
-rwxr-xr-x 1 root root  233 Sep  5  2013 container-executor.cfg
-rw-r--r-- 1 root root 1979 May 16 12:20 core-site.xml
-rw-r--r-- 1 root root 2775 May 16 12:20 hadoop-env.sh
-rw-r--r-- 1 root root 1450 May 16 12:20 hdfs-site.xml
-rw-r--r-- 1 root root  314 May 16 12:20 log4j.properties
-rw-r--r-- 1 root root 2446 May 16 12:20 mapred-site.xml
[11:23:12 [email protected]:~]# 

I suspect the problem is stale configuration in /etc/hadoop/conf.cloudera.hdfs1 and /etc/hadoop/conf.cloudera.mapreduce1, but I'm not sure.

It looks like /etc/hadoop/conf/* was never updated:

# ls -l /etc/hadoop/conf/
total 24
-rwxr-xr-x 1 root root  233 Sep  5  2013 container-executor.cfg
-rw-r--r-- 1 root root 1979 May 16 12:20 core-site.xml
-rw-r--r-- 1 root root 2775 May 16 12:20 hadoop-env.sh
-rw-r--r-- 1 root root 1450 May 16 12:20 hdfs-site.xml
-rw-r--r-- 1 root root  314 May 16 12:20 log4j.properties
-rw-r--r-- 1 root root 2446 May 16 12:20 mapred-site.xml
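
(On a CM-managed node, /etc/hadoop/conf is normally a chain of symlinks managed by alternatives, so a quick diagnostic could look like the sketch below; the paths are the ones listed above, and the diffs simply show whether the older-dated directories differ from the freshly deployed ones.)

# Follow the symlink chain the hadoop client actually uses
readlink /etc/hadoop/conf
readlink -f /etc/hadoop/conf
# Compare the stale-looking copies against the freshly deployed ones
diff /etc/hadoop/conf.cloudera.hdfs/hdfs-site.xml /etc/hadoop/conf.cloudera.hdfs1/hdfs-site.xml
diff /etc/hadoop/conf.cloudera.mapreduce/hdfs-site.xml /etc/hadoop/conf.cloudera.mapreduce1/hdfs-site.xml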

Does anyone have any idea about this issue?

hadoop hdfs cloudera cloudera-manager cloudera-cdh
4 Answers

7 votes

It looks like you are using the wrong client configuration in the /etc/hadoop/conf directory. Sometimes the Cloudera Manager (CM) "Deploy Client Configuration" option does not take effect.

After enabling NN HA, you should have valid core-site.xml and hdfs-site.xml files in your Hadoop client configuration directory. To get valid site files, go to the HDFS service in CM and choose the "Download Client Configuration" option from the Actions button. You will get the configuration files as a zip archive; extract the zip and replace the /etc/hadoop/conf/core-site.xml and /etc/hadoop/conf/hdfs-site.xml files with the extracted core-site.xml and hdfs-site.xml.
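
A rough outline of those steps from the shell (the archive name and its internal hadoop-conf directory are assumptions; CM may name them differently):

# Unpack the client configuration downloaded from CM (archive name is an assumption)
unzip hdfs-clientconfig.zip -d /tmp/hdfs-clientconfig
# Back up the current site files, then replace them with the extracted ones
cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/core-site.xml.bak
cp /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/hdfs-site.xml.bak
cp /tmp/hdfs-clientconfig/hadoop-conf/core-site.xml /etc/hadoop/conf/core-site.xml
cp /tmp/hdfs-clientconfig/hadoop-conf/hdfs-site.xml /etc/hadoop/conf/hdfs-site.xml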


1 vote

Got it resolved. The wrong configuration was linked: "/etc/hadoop/conf/" -> "/etc/alternatives/hadoop-conf/" -> "/etc/hadoop/conf.cloudera.mapreduce1"

It has to be "/etc/hadoop/conf/" -> "/etc/alternatives/hadoop-conf/" -> "/etc/hadoop/conf.cloudera.mapreduce"
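
If the link is managed by the alternatives system (as is usual on CM-managed hosts), repointing it might look like the sketch below; the alternative name hadoop-conf and the RHEL-style alternatives command are assumptions, and --set only works if the target directory is already registered as an alternative.

# Show what the hadoop-conf alternative currently points to
alternatives --display hadoop-conf
# Point it at the correct, freshly deployed directory
alternatives --set hadoop-conf /etc/hadoop/conf.cloudera.mapreduce
# Verify the full chain now resolves to the right directory
readlink -f /etc/hadoop/conf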


0 votes

The statement below in my code resolved the problem by specifying the host and port explicitly:

val dfs = sqlContext.read.json("hdfs://localhost:9000//user/arvindd/input/employee.json")

-4 votes

I resolved this issue by putting the full URI on the line that creates the RDD:

myfirstrdd = sc.textFile("hdfs://192.168.35.132:8020/BUPA.txt")

Then I was able to do the other RDD transformations. Make sure you have read/write/execute permissions on the file, or you can do chmod 777 on it.
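
(If permissions are the issue, the change could be made with the HDFS shell; the path below is simply the one from the example above.)

# Open up permissions on the input file in HDFS
hadoop fs -chmod 777 /BUPA.txt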
