Unable to close file because the last block does not have enough number of replicas

Problem description · Votes: 1 · Answers: 1

From the error message it is clear that there is a problem persisting replicas of a particular block of the file. The likely cause is a problem reaching the DataNodes that hold (or should hold) that block's replicas. See the full log below.

I found another user, "huasanyelao" (https://stackoverflow.com/users/987275/huasanyelao), who hit a similar exception, but their use case was different.

Now, how do we resolve this kind of problem? I understand there is no single fix that covers every case.
1. What immediate steps should I take to resolve this error?
2. For jobs that I was not monitoring at the time, what approach should I take to diagnose and fix these failures after the fact?

P.S.: Apart from fixing network or access problems, what other approaches should I follow?
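One client-side knob worth checking (an assumption on my part, not something from the original post: the property name and default below are from Hadoop's hdfs-default.xml and should be verified against your Hadoop version) is the number of times the client retries `completeFile` before giving up with exactly this "not enough replicas" error:

```xml
<!-- hdfs-site.xml (client side) -->
<property>
  <!-- Controls how often the DFS client retries close()/completeFile()
       while waiting for the last block to reach minimum replication.
       The default is 5; raising it buys a slow or overloaded NameNode
       more time before the close fails. -->
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <value>10</value>
</property>
```

This does not fix the underlying cause (unreachable DataNodes or an overloaded NameNode); it only makes the client more tolerant while you investigate.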

Error log:

15/04/10 11:21:13 INFO impl.TimelineClientImpl: Timeline service address: http://your-name-node/ws/v1/timeline/
15/04/10 11:21:14 INFO client.RMProxy: Connecting to ResourceManager at your-name-node/xxx.xx.xxx.xx:0000
15/04/10 11:21:34 WARN hdfs.DFSClient: DataStreamer Exception
java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:29)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:512)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1516)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1272)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
15/04/10 11:21:40 INFO hdfs.DFSClient: Could not complete /user/xxxxx/.staging/job_11111111111_1212/job.jar retrying...
15/04/10 11:21:46 INFO hdfs.DFSClient: Could not complete /user/xxxxx/.staging/job_11111111111_1212/job.jar retrying...
15/04/10 11:21:59 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/xxxxx/.staging/job_11111111111_1212
Error occured in MapReduce process:
java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
        at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2132)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2100)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1903)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1871)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1836)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyJar(JobSubmitter.java:286)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:254)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
        at com.xxx.xxx.xxxx.driver.GenerateMyFormat.runMyExtract(GenerateMyFormat.java:222)
        at com.xxx.xxx.xxxx.driver.GenerateMyFormat.run(GenerateMyFormat.java:101)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at com.xxx.xxx.xxxx.driver.GenerateMyFormat.main(GenerateMyFormat.java:42)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
hadoop mapreduce block hdfs
1 Answer
Score: 1

We had a similar problem. It was mainly due to dfs.namenode.handler.count being too low. Increasing it can help on small clusters, but in our case the root cause was a DoS-like situation: the NameNode could not keep up with the number of connections and RPC calls, while pending block deletions piled up into a huge backlog. Check the HDFS audit logs for bulk deletes or other heavy HDFS operations and match them against running jobs; those are likely what is overwhelming the NameNode. Throttling or rescheduling those jobs will help HDFS recover.
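The setting mentioned above lives in hdfs-site.xml on the NameNode. A minimal sketch (the value of 100 is illustrative, not from the original answer; a common rule of thumb is roughly 20 × ln(cluster size), so tune it for your cluster and Hadoop version):

```xml
<!-- hdfs-site.xml (NameNode side) -->
<property>
  <!-- Number of NameNode RPC server threads handling client requests.
       The default (10) is easily saturated on busy clusters, which can
       stall completeFile() calls and cause this "not enough replicas" error. -->
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
</property>
```

A NameNode restart is required for the change to take effect, so plan it during a maintenance window.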
