【HDFS】Hive job reports HDFS exception: last block does not have enough number of replicas

Posted by DDR (Elite) on 2016-1-20 10:09:15 · Latest reply: 2019-05-31 11:08:24

【Problem Symptom】



The TSP client reported a close ERROR at 2015-05-27 19:00.811, with the following log:

2015-05-27 19:00.811 [pool-2-thread-3] ERROR: /tsp/nedata/collect/UGW/ugwufdr/20150527/10/6_20150527105000_20150527105500_SR5S14_1432723806338_128_11.pkg.tmp1432723806338 close hdfs sequence file fail (SequenceFileInfoChannel.java:444)

java.io.IOException: Unable to close file because the last block does not have enough number of replicas.

         at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2160)

         at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2128)

         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)

         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)

         at com.huawei.pai.collect2.stream.SequenceFileInfoChannel.close(SequenceFileInfoChannel.java:433)

         at com.huawei.pai.collect2.stream.SequenceFileWriterToolChannel$FileCloseTask.call(SequenceFileWriterToolChannel.java:804)

         at com.huawei.pai.collect2.stream.SequenceFileWriterToolChannel$FileCloseTask.call(SequenceFileWriterToolChannel.java:792)

         at java.util.concurrent.FutureTask.run(FutureTask.java:262)

         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

         at java.lang.Thread.run(Thread.java:745)





【Problem Analysis】



1. The TSP HDFS client started writing /tsp/nedata/collect/UGW/ugwufdr/20150527/10/6_20150527105000_20150527105500_SR5S14_1432723806338_128_11.pkg.tmp1432723806338 at 2015-05-27 19:00,232. The block allocated for it was blk_1099105501_25370893:

2015-05-27 19:00,232 | INFO  | IPC Server handler 30 on 25000 | BLOCK* allocateBlock: /tsp/nedata/collect/UGW/ugwufdr/20150527/10/6_20150527105000_20150527105500_SR5S14_1432723806338_128_11.pkg.tmp1432723806338. BP-1803470917-172.29.57.33-1428597734132 blk_1099105501_25370893{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2d7b7d0-f410-4958-8eba-6deecbca2f87:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-76bd80e7-ad58-49c6-bf2c-03f91caf750f:NORMAL|RBW]]} | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3166)



2. After the write finished, the TSP HDFS client called fsync:

2015-05-27 19:00,717 | INFO  | IPC Server handler 22 on 25000 | BLOCK* fsync: /tsp/nedata/collect/UGW/ugwufdr/20150527/10/6_20150527105000_20150527105500_SR5S14_1432723806338_128_11.pkg.tmp1432723806338 for DFSClient_NONMAPREDUCE_-120525246_15 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3805)



3. The TSP HDFS client then called close to close the file. When the NameNode receives the client's close request, it checks the completion state of the last block; the file can be closed only after enough DataNodes have reported the block as complete. This check is performed by the checkFileProgress function, which produced the log entries below (a simplified sketch of the check follows the log excerpt):

2015-05-27 19:00,603 | INFO  | IPC Server handler 44 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)

2015-05-27 19:00,005 | INFO  | IPC Server handler 45 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)

2015-05-27 19:00,806 | INFO  | IPC Server handler 63 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)

2015-05-27 19:00,408 | INFO  | IPC Server handler 43 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)

2015-05-27 19:00,610 | INFO  | IPC Server handler 37 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)

2015-05-27 19:00,011 | INFO  | IPC Server handler 37 on 25000 | BLOCK* checkFileProgress: blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} has not reached minimal replication 1 | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkFileProgress(FSNamesystem.java:3197)
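For illustration, the check amounts to comparing the number of replicas the DataNodes have reported for the last block against the NameNode's minimum replication setting (dfs.namenode.replication.min, default 1). The following is only a simplified sketch of that logic, not the actual FSNamesystem.checkFileProgress implementation:

    // Simplified sketch only, NOT the real FSNamesystem.checkFileProgress code.
    // The file cannot be closed while the last block (here in COMMITTED state,
    // i.e. fully written by the client) has fewer reported replicas than
    // dfs.namenode.replication.min (default 1).
    static boolean checkFileProgress(String blockName, int reportedReplicas, int minReplication) {
        if (reportedReplicas < minReplication) {
            System.out.println("BLOCK* checkFileProgress: " + blockName
                + " has not reached minimal replication " + minReplication);
            return false;   // close attempt fails; the client will retry
        }
        return true;        // enough DataNodes have reported the block
    }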



4. The NameNode printed checkFileProgress several times because the TSP HDFS client attempted close repeatedly, and each attempt failed because the block state did not yet satisfy the requirement. The number of client retries is controlled by the parameter dfs.client.block.write.locateFollowingBlock.retries, which defaults to 5, which is why six checkFileProgress entries appear in the NameNode log (the initial attempt plus five retries).
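The retry behaviour can be pictured as follows. This is a minimal sketch, assuming a hypothetical tryComplete callable that stands in for the NameNode's complete-file RPC; it is not the actual DFSOutputStream.completeFile code:

    import java.util.concurrent.Callable;

    public class CompleteFileRetrySketch {
        // tryComplete stands in for the NameNode "complete file" call; it returns
        // true once enough DataNodes have reported the last block as received.
        static void completeFile(Callable<Boolean> tryComplete, int retries) throws Exception {
            long sleepMs = 400;                        // first back-off interval
            boolean fileComplete = tryComplete.call(); // initial attempt
            while (!fileComplete) {
                if (retries == 0) {
                    throw new java.io.IOException("Unable to close file because the last block"
                        + " does not have enough number of replicas.");
                }
                retries--;
                Thread.sleep(sleepMs);                 // wait before retrying
                sleepMs *= 2;                          // back-off doubles: 400, 800, 1600 ms ...
                fileComplete = tryComplete.call();     // each attempt triggers checkFileProgress
            }
        }
    }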



5. However, about 0.5 seconds later the DataNodes reported that the block had been written successfully:

2015-05-27 19:00,608 | INFO  | IPC Server handler 60 on 25000 | BLOCK* addStoredBlock: blockMap updated: 192.168.10.21:25009 is added to blk_1099105501_25370893{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-ef5fd3c9-5088-4813-ae9a-34a0714ec3a3:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-f863e30f-ce5b-48cc-9cca-72f64c558adc:NORMAL|RBW]]} size 11837530 | org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.logAddStoredBlock(BlockManager.java:2393)

2015-05-27 19:00,297 | INFO  | IPC Server handler 37 on 25000 | BLOCK* addStoredBlock: blockMap updated: 192.168.10.10:25009 is added to blk_1099105501_25370893 size 11837530 | org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.logAddStoredBlock(BlockManager.java:2393)



6. Possible causes for the delayed DataNode block-received reports include a network bottleneck or a CPU bottleneck.



7. If close is called again at this point, or if the number of close retries is increased, close will return successfully. It is therefore advisable to increase the parameter dfs.client.block.write.locateFollowingBlock.retries appropriately. With the default value of 5, the retry intervals are 400ms, 800ms, 1600ms, 3200ms, 6400ms and 12800ms, so the close call can take up to 25.2 seconds to return.
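As a quick check of the arithmetic, the intervals above double each time, so the total wait is a geometric sum:

    400 + 800 + 1600 + 3200 + 6400 + 12800 ms = 400 × (2^6 − 1) ms = 25200 ms ≈ 25.2 s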





【Solution】

The number of retries can be increased by adjusting the parameter dfs.client.block.write.locateFollowingBlock.retries. If the value is set to 6, the sleep intervals between attempts become 400ms, 800ms, 1600ms, 3200ms, 6400ms, 12800ms and 25600ms, which means the close call can take up to 50.8 seconds to return.
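Since dfs.client.block.write.locateFollowingBlock.retries is a client-side setting, one way to apply it (a sketch, assuming a plain Hadoop Java client; the property can equally be added to the client-side hdfs-site.xml) is to set it on the Configuration before creating the FileSystem:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ClientRetryConfigExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Raise the number of close/complete retries from the default 5 to 6.
            conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 6);
            FileSystem fs = FileSystem.get(conf);  // client picks up the new value
            // ... write and close files as usual ...
            fs.close();
        }
    }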

However, 中软 (the software team) does not recommend modifying this parameter:

The parameter dfs.client.block.write.locateFollowingBlock.retries is not exposed in the open-source configuration, and adjusting it only works around the problem; under heavy CPU load the problem can still occur.

It is recommended instead to reduce job concurrency or limit CPU usage so as to relieve network transfer pressure, allowing the DataNodes to report block status to the NameNode promptly.

Conclusion:

Reduce the system load. When the problem occurred, the cluster was under heavy load: all 32 CPU cores were fully (100%) allocated to running MR tasks. At least 20% of the CPU should be kept free.

建赟 (Expert) replied on 2016-5-21 20:46:33:

Thanks for sharing!

IT管理员巴拉巴拉 (Mentor) replied on 2016-1-20 10:24:45:

Thanks for sharing!

hadoop2019 replied on 2019-5-31 11:08:24:

Hello, how should this parameter be adjusted? I checked hdfs-site.xml and the core configuration file but did not find this parameter. Could you explain how to adjust it?