Why is HDFS throwing LeaseExpiredException in a Hadoop cluster (AWS EMR)?

I am getting a LeaseExpiredException in the Hadoop cluster -

tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-2-148.log

2016-09-21 11:54:14,533 INFO BlockStateChange (IPC Server handler 10 on 8020): BLOCK* InvalidateBlocks: add blk_1073747501_6677 to 172.30.2.189:50010
2016-09-21 11:54:14,534 INFO org.apache.hadoop.ipc.Server (IPC Server handler 31 on 8020): IPC Server handler 31 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 172.30.2.189:37674 Call#34 Retry#0: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive/hadoop/_tez_session_dir/1e4f71f0-9f29-468d-980e-9f19690bf849/.tez/application_1474442135017_0113/application_1474442135017_0114/recovery: File does not exist. Holder DFSClient_NONMAPREDUCE_-143782605_1 does not have any open files.
2016-09-21 11:54:15,557 INFO org.apache.hadoop.hdfs.StateChange (IPC Server handler 0 on 8020): BLOCK* allocate blk_1073747503_6679{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-86592ba7-c51a-431d-8019-9e362d721b28:NORMAL:172.30.2.189:50010|RBW]]} for /var/log/hadoop-yarn/apps/hadoop/logs/application_1474442135017_0114/ip-172-30-2-122.us-west-2.compute.internal_8041.tmp
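To see what the lease holder was doing around that moment, the NameNode log can be filtered by the client name from the exception (a small diagnostic sketch; the log file is the same one tailed above):

    # Show every NameNode-side operation by the client that lost the lease,
    # e.g. a concurrent delete/rename of the same path by another job.
    grep 'DFSClient_NONMAPREDUCE_-143782605_1' /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-2-148.log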

And some of the Hive queries are failing as well. My guess is that this is because of the above problem.

tail -f /var/log/hive/hive-server2.log

2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:execute(1477)) - Executing command(queryId=hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7): 
INSERT INTO TABLE validation_logs 
SELECT 
"18364", 
"TABLE_VALIDATION",
error.code,
error.validator,
get_json_object(key, '$.table_name'),
NULL,
NULL,
error.failure_msg,
FROM_UNIXTIME(UNIX_TIMESTAMP('20160921','yyyyMMdd')), 
from_unixtime(unix_timestamp())
FROM
(SELECT 
MAP(concat("{\"table_name\" : \"", table_name , "\"}"), error) AS err_map
FROM table_level_validation_result
) AS res
LATERAL VIEW EXPLODE(res.err_map) tmp AS key, error WHERE error IS NOT NULL AND (error.code="error" OR error.code="warn")

2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Query ID = hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7
2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Total jobs = 1
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Launching Job 1 out of 1
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:launchTask(1856)) - Starting task [Stage-1:MAPRED] in serial mode
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(404)) - The current user: hadoop, session user: hadoop
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(421)) - Current queue name is null incoming queue name is null
2016-09-21T11:59:35,173 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,174 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:updateSession(279)) - Session is already open
2016-09-21T11:59:35,175 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142291 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/commons-vfs2-2.0.jar
2016-09-21T11:59:35,176 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142320 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-ddb-hive.jar
2016-09-21T11:59:35,177 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142353 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-hive-goodies.jar
2016-09-21T11:59:35,178 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142389 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-kinesis-hive.jar
2016-09-21T11:59:35,178 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142423 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-contrib-2.1.0-amzn-0.jar
2016-09-21T11:59:35,179 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142496 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-plugins-0.0.1-emr-upgrade-20160919.070538-1.jar
2016-09-21T11:59:35,179 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:build(321)) - Dag name: INSERT INTO TABLE valid...error.code="warn")(Stage-1)
2016-09-21T11:59:35,180 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,223 INFO  [HiveServer2-Background-Pool: Thread-3881([])]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(273)) - Submitted application application_1474442135017_0147
2016-09-21T11:59:35,224 INFO  [HiveServer2-Background-Pool: Thread-3881([])]: client.TezClient (TezClient.java:start(477)) - The url to track the Tez Session: http://ip-172-30-2-148.us-west-2.compute.internal:20888/proxy/application_1474442135017_0147/
2016-09-21T11:59:35,391 INFO  [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printInfo(1054)) - Map 1: 0(+0,-4)/1  
2016-09-21T11:59:35,446 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Status: Failed
2016-09-21T11:59:35,447 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Vertex failed, vertexName=Map 1, vertexId=vertex_1474442135017_0134_1_00, diagnostics=[Task failed, taskId=task_1474442135017_0134_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_1_00_000000_0:
java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
    ... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
    ... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
    ... 20 more
], TaskAttempt 1 failed, TaskAttempt 2 failed, TaskAttempt 3 failed (attempt_1474442135017_0134_1_00_000000_1 through attempt_1474442135017_0134_1_00_000000_3), each with a stack trace identical to the one above.]

Hive logs with DEBUG mode enabled -

The exceptions are highlighted in green.

As far as I understand, just before the exception Hive renamed the file to another name, and all of this happens on S3. Since S3 is only eventually consistent, the renamed file is sometimes visible and sometimes not, which is why the query sometimes throws this exception and sometimes works.
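One way to check this right after a failure (a quick sketch; it assumes the usual EMR setup where the Hadoop shell can read s3:// paths through EMRFS) is to list the staging path from the stack trace and see whether the file is visible yet:

    # List the staging directory named in the FileNotFoundException; with an
    # eventually consistent listing, the renamed file may appear only after a delay.
    hdfs dfs -ls 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/'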

https://docs.google.com/document/d/1cwXVqQ3p-xPFcBqU9AuD7C8z8rHjhUIHwPjY-nVpFK0/edit?usp=sharing

I also set the following Hive configuration properties before executing the query -

set hive.mapjoin.smalltable.filesize = 2000000000
set mapreduce.map.speculative = false
set mapreduce.output.fileoutputformat.compress = true
set hive.exec.compress.output = true
set mapreduce.task.timeout = 6000000
set hive.optimize.bucketmapjoin.sortedmerge = true
set io.compression.codecs = org.apache.hadoop.io.compress.GzipCode
set hive.auto.convert.sortmerge.join.noconditionaltask = false
set hive.optimize.bucketmapjoin = true
set hive.exec.compress.intermediate = true
set hive.enforce.bucketmapjoin = true
set mapred.output.compress = true
set mapreduce.map.output.compress = true
set hive.auto.convert.sortmerge.join = false
set hive.auto.convert.join = false
set mapreduce.reduce.speculative = false
set mapred.output.compression.codec = org.apache.hadoop.io.compress.GzipCodec
set hive.cache.expr.evaluation=false
set mapred.output.compress=true
set hive.exec.compress.output=true
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
set hive.exec.compress.intermediate=true
set mapreduce.map.output.compress=true
set hive.auto.convert.join=false
set mapreduce.map.speculative=false
set mapreduce.reduce.speculative=false

Cluster details -

  1. One data node with 32 GB of disk space.
  2. Hive - 2.1.0, execution engine - Tez 0.8.3
  3. Hadoop - 2.7.2

Questions -

  1. Why is the LeaseExpiredException being thrown?
  2. Is the Hive query failure related to the LeaseExpiredException?
  3. Is it because of wrong Hive configuration properties?

Update-1

As per this answer - LeaseExpiredException: No lease error on HDFS (Failed to close file),

I added

SET hive.exec.max.dynamic.partitions=100000; 
SET hive.exec.max.dynamic.partitions.pernode=100000;

But it still showed the same exception.


asked by devsda on 21.09.2016


Answers (1)


I solved the problem. Let me explain it in detail.

The exceptions that were coming up -

  1. LeaseExpiredException - from the HDFS side.
  2. FileNotFoundException - from the Hive side (when the Tez execution engine executes the DAG)

Problem scenario -

  1. We had just upgraded the Hive version from 0.13.0 to 2.1.0. Everything was working fine with the previous version; zero runtime exceptions.

Different thoughts while solving the problem -

  1. The first thought was that two threads were working on the same piece because of NameNode smartness. But as per the settings below

    set mapreduce.map.speculative=false
    set mapreduce.reduce.speculative=false

that was not possible.

  2. Then I increased the count from 1000 to 100000 for the following settings -

    SET hive.exec.max.dynamic.partitions=100000;
    SET hive.exec.max.dynamic.partitions.pernode=100000;

That did not work either.

  3. The third thought was that, within the same process, the file created by mapper-1 was definitely being deleted by another mapper/reducer. But we did not find any such logs in the HiveServer2 or Tez logs.

  4. Finally, the root cause lies in the application-layer code itself. The hive-exec-2.1.0 version introduced a new configuration property -

    "hive.exec.stagingdir" : ".hive-staging"

Description of the above property -

Directory name that will be created inside table locations in order to support HDFS encryption. This replaces ${hive.exec.scratchdir} for query results, with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.
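For illustration, here is a minimal sketch of pointing the staging directory away from the S3 table location and onto HDFS (the /tmp path and the .hql file name are assumptions for the example, not our exact setup):

    # .hive-staging directories are written to an HDFS path instead of being
    # created under the S3 table location.
    hive --hiveconf hive.exec.stagingdir=/tmp/hive/.hive-staging -f insert_validation_logs.hql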

So if there are any parallel jobs in the application-layer (ETL) code running operations (rename/delete/move) on the same table, it can lead to this problem.

And in our case, 2 parallel jobs were running "INSERT OVERWRITE" into the same table, which led to the deletion of one mapper's metadata file, and that is what caused this issue.
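Illustratively, the failing pattern looked like this (the script names are hypothetical; both scripts INSERT OVERWRITE the same table):

    # Two ETL jobs launched in parallel; each creates and then cleans up a
    # .hive-staging directory under the same table location, so one job can
    # delete staging files that the other job is still reading.
    hive -f job_a_insert_overwrite_validation_logs.hql &
    hive -f job_b_insert_overwrite_validation_logs.hql &
    wait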

Resolution -

  1. Move the metadata (staging) file location out of the table location (the table resides in S3).
  2. Disable HDFS encryption (as mentioned in the stagingdir property description).
  3. Change the application-layer code to avoid the concurrency issue (one option is sketched below).
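For option 3, a minimal sketch is to serialize the writers with an exclusive file lock around each Hive invocation; flock and the lock-file path here are illustrative assumptions, not what we actually shipped:

    # Each job wraps its INSERT OVERWRITE in the same lock; whichever job starts
    # second blocks until the first finishes, so the two never touch the table's
    # staging directories at the same time.
    flock /tmp/validation_logs.lock hive -f job_a_insert_overwrite_validation_logs.hql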

Related question - Why hive_staging file is missing in AWS EMR

answered by devsda on 30.09.2016