MapReduce job exits with exitCode -1000: resource changed on src filesystem

Question (votes: 0, answers: 1)
    Application application_1552978163044_0016 failed 5 times due to AM Container for appattempt_1552978163044_0016_000005 exited with exitCode: -1000

Diagnostics:

java.io.IOException: Resource abfs://[email protected]/hdp/apps/2.6.5.3006-29/mapreduce/mapreduce.tar.gz changed on src filesystem (expected 1552949440000, was 1552978240000). Failing this attempt. Failing the application.

azure-storage yarn hadoop2 hdinsight
1 Answer

0 votes

Based only on the exception message, this appears to be caused by Azure Storage not preserving the original timestamp of the copied file. I found a suggested workaround: change the yarn-common source code to disable the block that checks the timestamp when a file is copied, so the exception is not thrown and the MR job can keep running.
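As a side note, the two epoch-millisecond values in the diagnostics can be decoded to see how far apart they are. A minimal, self-contained sketch (the class name is mine; the two values are copied from the exception message above):

import java.time.Instant;

// Illustration only: decode the two epoch-millisecond timestamps reported in
// the "changed on src filesystem" exception above.
public class ResourceTimestampCheck {
  public static void main(String[] args) {
    long expected = 1552949440000L; // timestamp YARN recorded for the resource
    long actual   = 1552978240000L; // modification time seen on abfs:// at localization
    System.out.println("expected: " + Instant.ofEpochMilli(expected)); // 2019-03-18T22:50:40Z
    System.out.println("actual:   " + Instant.ofEpochMilli(actual));   // 2019-03-19T06:50:40Z
    System.out.println("difference in hours: " + (actual - expected) / 3_600_000.0); // 8.0
  }
}

The two values are exactly eight hours apart, which is consistent with the copy step rewriting the modification time rather than the archive itself having been replaced.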

Here is the code in the latest version of the yarn-common source code that checks the timestamp of the copied file and throws the exception:

  /**
   * Localize files.
   * @param destination destination directory
   * @throws IOException cannot read or write file
   * @throws YarnException subcommand returned an error
   */
  private void verifyAndCopy(Path destination)
      throws IOException, YarnException {
    final Path sCopy;
    try {
      sCopy = resource.getResource().toPath();
    } catch (URISyntaxException e) {
      throw new IOException("Invalid resource", e);
    }
    FileSystem sourceFs = sCopy.getFileSystem(conf);
    FileStatus sStat = sourceFs.getFileStatus(sCopy);
    if (sStat.getModificationTime() != resource.getTimestamp()) {
      throw new IOException("Resource " + sCopy +
          " changed on src filesystem (expected " + resource.getTimestamp() +
          ", was " + sStat.getModificationTime());
    }
    if (resource.getVisibility() == LocalResourceVisibility.PUBLIC) {
      if (!isPublic(sourceFs, sCopy, sStat, statCache)) {
        throw new IOException("Resource " + sCopy +
            " is not publicly accessible and as such cannot be part of the" +
            " public cache.");
      }
    }

    downloadAndUnpack(sCopy, destination);
  }
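For reference, here is a minimal sketch of the kind of change that workaround describes: inside verifyAndCopy, replace the throw with a warning so localization continues when only the timestamp differs. This assumes you patch hadoop-yarn-common yourself (LOG here stands for the class's existing logger field); it is not an official fix, and it disables a consistency check.

    // Workaround sketch (assumption: patch FSDownload#verifyAndCopy in
    // hadoop-yarn-common and rebuild the jar). The mismatch is only logged
    // instead of failing the container.
    if (sStat.getModificationTime() != resource.getTimestamp()) {
      LOG.warn("Resource " + sCopy +
          " changed on src filesystem (expected " + resource.getTimestamp() +
          ", was " + sStat.getModificationTime() + "); ignoring and continuing");
      // original behaviour: throw new IOException(...)
    }

The rebuilt hadoop-yarn-common jar would then have to replace the original one on the cluster nodes. Since the check is gone, a genuinely updated mapreduce.tar.gz would no longer be detected, so treat this as a stopgap rather than a proper fix.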