Error when starting ResourceManager and NodeManager in Hadoop

Question · votes: 0 · answers: 4

I'm trying to set up Hadoop3-alpha3 as a single-node cluster (pseudo-distributed), following the Apache guide to do so. When I try to run the example MapReduce job, the connection is refused every time. After running

sbin/start-all.sh

I see these exceptions in the ResourceManager log (and similar ones in the NodeManager log):

xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
xxxx-xx-xx xx:xx:xx,xxx DEBUG org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Exception is:
java.beans.IntrospectionException: bad write method arg count: public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)
    at java.desktop/java.beans.PropertyDescriptor.findPropertyType(PropertyDescriptor.java:696)
    at java.desktop/java.beans.PropertyDescriptor.setWriteMethod(PropertyDescriptor.java:356)
    at java.desktop/java.beans.PropertyDescriptor.<init>(PropertyDescriptor.java:142)
    at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.createFluentPropertyDescritor(FluentPropertyBeanIntrospector.java:178)
    at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.introspect(FluentPropertyBeanIntrospector.java:141)
    at org.apache.commons.beanutils.PropertyUtilsBean.fetchIntrospectionData(PropertyUtilsBean.java:2245)
    at org.apache.commons.beanutils.PropertyUtilsBean.getIntrospectionData(PropertyUtilsBean.java:2226)
    at org.apache.commons.beanutils.PropertyUtilsBean.getPropertyDescriptor(PropertyUtilsBean.java:954)
    at org.apache.commons.beanutils.PropertyUtilsBean.isWriteable(PropertyUtilsBean.java:1478)
    at org.apache.commons.configuration2.beanutils.BeanHelper.isPropertyWriteable(BeanHelper.java:521)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initProperty(BeanHelper.java:357)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initBeanProperties(BeanHelper.java:273)
    at org.apache.commons.configuration2.beanutils.BeanHelper.initBean(BeanHelper.java:192)
    at org.apache.commons.configuration2.beanutils.BeanHelper$BeanCreationContextImpl.initBean(BeanHelper.java:669)
    at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.initBeanInstance(DefaultBeanFactory.java:162)
    at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.createBean(DefaultBeanFactory.java:116)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:459)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:479)
    at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:492)
    at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResultInstance(BasicConfigurationBuilder.java:447)
    at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResult(BasicConfigurationBuilder.java:417)
    at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.getConfiguration(BasicConfigurationBuilder.java:285)
    at org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:119)
    at org.apache.hadoop.metrics2.impl.MetricsConfig.create(MetricsConfig.java:98)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:478)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:678)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1129)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:315)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1407)

And then later in the file:

xxxx-xx-xx xx:xx:xx,xxx FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
    at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
    at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
    at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
    at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
    at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
    at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
    at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
    at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
    at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
    at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
    at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
    at com.google.inject.AbstractModule.install(AbstractModule.java:122)
    at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:332)
    at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1116)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1218)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1408)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @173f73e7
    at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:337)
    at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:281)
    at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:197)
    at java.base/java.lang.reflect.Method.setAccessible(Method.java:191)
    at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
    ... 29 more
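
For context, the InaccessibleObjectException at the bottom is the Java 9 module system blocking reflective access to java.lang. In principle that specific check can be relaxed by passing --add-opens (a standard JVM flag on Java 9+) through HADOOP_OPTS in hadoop-env.sh, though this is speculative for Hadoop and the accepted answer below fixes it by downgrading to Java 8 instead:

# speculative workaround, untested: open java.lang to unnamed modules on Java 9+
export HADOOP_OPTS="${HADOOP_OPTS} --add-opens java.base/java.lang=ALL-UNNAMED"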

For reference, here is my core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
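
One small note on this file: fs.default.name is the deprecated Hadoop 1.x key. It still works as an alias, but on Hadoop 2+/3 the preferred key is fs.defaultFS, e.g.:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>

(This is unrelated to the startup failure; Hadoop just logs a deprecation warning for the old key.)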

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

And yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

I don't know what is causing these exceptions; any help would be appreciated.

Edit: added hadoop-env.sh:

export JAVA_HOME=/usr/local/jdk-9
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA
Tags: java, hadoop, resourcemanager, hadoop3
4 Answers

11 votes

As @tk421 mentioned in the comments: Java 9 is not yet compatible with Hadoop 3 (and possibly not with any Hadoop version).

https://issues.apache.org/jira/browse/HADOOP-11123

I changed to Java 8u181 and both are now starting:

hadoop@hadoop:/usr/local/hadoop$ sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop]
Starting resourcemanager
Starting nodemanagers
hadoop@hadoop:/usr/local/hadoop$ jps
8756 SecondaryNameNode
8389 NameNode
9173 NodeManager
9030 ResourceManager
8535 DataNode
9515 Jps
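
For anyone with several JDKs still installed, a minimal sanity check before restarting the daemons (assuming hadoop-env.sh exports JAVA_HOME; the path in the comment is just an example):

$ echo $JAVA_HOME                  # should point at a JDK 8 install, e.g. /usr/local/jdk1.8.0_181
$ "$JAVA_HOME/bin/java" -version   # should report 1.8.x, not 9+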

1 vote

My problem was that I was using Java 11 with Hadoop.

So what I did was:

1. rm /Library/Java/*

2. Download Java 8 from https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

3. Install the Java 8 JDK, and

4. Fix JAVA_HOME in hadoop-env.sh (see the sketch after this list)

5. stop-all.sh

6. start-dfs.sh

7. start-yarn.sh
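
Since step 1 clears /Library/Java, this recipe is macOS-flavoured; there, the stock /usr/libexec/java_home utility can resolve the JDK 8 path needed in step 4 (a sketch, assuming a JDK 8 is installed):

$ /usr/libexec/java_home -V                          # list installed JDKs
$ export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)  # resolve the JDK 8 home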


0 votes

I'd like to share my observations. I had been using OpenJDK 17 to work with Hadoop.

I assumed that the newer the Java version, the better. I was wrong, and I had to switch to OpenJDK 8. So, how do we fix this?

  1. You need to uninstall the previous version of Java. There are several removal options:
  • Remove only OpenJDK:

    $ sudo apt-get remove openjdk*

  • Remove OpenJDK and its dependencies:

    $ sudo apt-get remove --auto-remove openjdk*

  • Remove OpenJDK and its configuration files:

    $ sudo apt-get purge openjdk*

  • Remove OpenJDK, its dependencies, and their configuration files:

    $ sudo apt-get purge --auto-remove openjdk*

In my case, I used the last option.

  2. You need to install OpenJDK 8.
  • Install OpenJDK:
    $ sudo apt install openjdk-8-jdk -y

Once the installation completes, you can check the Java version:

$ java -version; javac -version
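
Alternatively, if you would rather keep several JDKs installed side by side than purge them, Debian/Ubuntu's update-alternatives tool can switch the system default (an option, not what the original steps do):

    $ sudo update-alternatives --config java     # interactively pick the java-8-openjdk entry
    $ sudo update-alternatives --config javac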

  3. You need to edit the path in the JAVA_HOME variable. To do that, open hadoop-env.sh.
  • To open hadoop-env.sh, you can use the following command:

    sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh

    where $HADOOP_HOME is the Hadoop install location (for example, /home/hdoop/hadoop-3.2.4).

  • The JAVA_HOME entry should look something like export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64. Of course, this all depends on where your Java is located (a quick way to find it is sketched after this list).

  4. Go to the hadoop-3.2.4/sbin directory. Next, you need to stop the daemons on all cluster nodes:

    ./stop-all.sh


  5. Start the NameNode and DataNode:

    ./start-dfs.sh

  6. Start the YARN ResourceManager and NodeManagers:

    ./start-yarn.sh

  7. Check that all daemons are active and running as Java processes:

    jps

    The resulting list should look (approximately) like this:

    33706 SecondaryNameNode
    33330 NameNode
    34049 NodeManager
    33900 ResourceManager
    33482 DataNode
    34410 Jps

  8. Hadoop setup is complete!
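
As mentioned in step 3, the exact JAVA_HOME value depends on your system. On Debian/Ubuntu, one way to discover it (a sketch, not part of the original steps) is to resolve the real location of the java binary:

    $ readlink -f "$(which java)"
    # prints e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
    # drop the trailing /jre/bin/java (or /bin/java) to get JAVA_HOME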

P.S. I hope my answer is useful. I tried to cover all the details in it. Best of luck!


0 votes

Yes, as others have pointed out, the Java version is likely the problem.

Following this page, which clearly links to the compatible Java versions here: I tried Java 17 and start-yarn.sh failed, but with Java 11 start-yarn.sh worked.
