NoClassDefFoundError: org/apache/hadoop/yarn/util/Clock

Problem description

I get some errors when running the WordCount command:

2023-10-06 15:55:35,005 INFO mapreduce.Job: Job job_1696606856991_0001 running in uber mode : false
2023-10-06 15:55:35,006 INFO mapreduce.Job:  map 0% reduce 0%
2023-10-06 15:55:35,027 INFO mapreduce.Job: Job job_1696606856991_0001 failed with state FAILED due to: Application application_1696606856991_0001 failed 2 times due to AM Container for appattempt_1696606856991_0001_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2023-10-06 15:55:34.304] Exception from container-launch. Container id: container_1696606856991_0001_02_000001
Exit code: 1
[2023-10-06 15:55:34.311] Container exited with a non-zero exit code 1. Error file: prelaunch.err. Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Unable to initialize main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/util/Clock
For more detailed output, check the application tracking page: http://baoanh-master:9004/cluster/app/application_1696606856991_0001 Then click on links to logs of each attempt. Failing the application.
2023-10-06 15:55:35,052 INFO mapreduce.Job: Counters: 0
input file1.txt:
Hello World
input file2.txt:
Hello Hadoop
wordcount output:
cat: `output1/part-r-00000': No such file or directory

I have configured the mapred and yarn files as follows:

yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARNHOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>baoanh-master:9002</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>baoanh-master:9003</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>baoanh-master:9004</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>baoanh-master:9005</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>baoanh-master:9006</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property> 
<property>
<name>mapreduce.jobtracker.address</name>
<value>baoanh-master:9001</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
</property>
</configuration>
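As a quick sanity check of configurations like the two above, the stock `hadoop classpath` command prints the runtime classpath; if no YARN jars appear on it, `org/apache/hadoop/yarn/util/Clock` cannot be loaded, which matches the `NoClassDefFoundError` in the question. A minimal sketch, assuming `hadoop` is on the PATH:

```shell
#!/bin/bash
# Print one classpath entry per line and keep only the YARN jars.
# An empty result would explain the NoClassDefFoundError for
# org/apache/hadoop/yarn/util/Clock seen in the AM container.
hadoop classpath | tr ':' '\n' | grep 'yarn'
```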


test.sh

#!/bin/bash
# test the hadoop cluster by running wordcount
# create input files
mkdir input
echo "Hello World" > input/file1.txt
echo "Hello Hadoop" > input/file2.txt
# create input directory on HDFS
hadoop fs -mkdir -p input1
# put input files to HDFS
hdfs dfs -put ./input/* input1
# run wordcount
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-3.3.6-sources.jar org.apache.hadoop.examples.WordCount input1 output1
# print the input files
echo -e "\ninput file1.txt:"
hdfs dfs -cat input1/file1.txt
echo -e "\ninput file2.txt:"
hdfs dfs -cat input1/file2.txt
# print the output of wordcount
echo -e "\nwordcount output:"
hdfs dfs -cat output1/part-r-00000
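One detail worth double-checking in the script above: the `hadoop jar` line points at the `-sources` jar, which contains `.java` files rather than compiled classes. A sketch of the usual invocation against the compiled examples jar (path assumes the stock Hadoop 3.3.6 layout):

```shell
#!/bin/bash
# Run wordcount from the compiled examples jar instead of the -sources jar.
# The examples driver takes the program name ("wordcount") as its first
# argument, so no fully qualified class name is needed.
hadoop jar "$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar" \
  wordcount input1 output1
```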

The problem I ran into: an error when running WordCount.

I hope you can help me! My English is not good, so I'm very sorry! Thank you very much!

hadoop mapreduce word-count
1 Answer

Maybe you can go into the ${HADOOP_HOME}/bin directory and run:

$ hdfs dfs -ls /input/

to see whether there is an output1 directory?
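Beyond listing directories, the full stack trace behind "exited with exitCode: 1" is usually in the aggregated container logs. A sketch using the application id printed in the job output above (this assumes log aggregation is enabled on the cluster):

```shell
#!/bin/bash
# Fetch all container logs for the failed application; the AM's stderr
# will contain the complete NoClassDefFoundError stack trace.
yarn logs -applicationId application_1696606856991_0001
```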
