Usage and code examples of the org.apache.hadoop.hive.ql.exec.Utilities.clearWorkMap() method


This article collects Java code examples of the org.apache.hadoop.hive.ql.exec.Utilities.clearWorkMap() method and shows how it is used in practice. The examples come from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should make a useful reference. Details of the method:
Package path: org.apache.hadoop.hive.ql.exec.Utilities
Class name: Utilities
Method name: clearWorkMap

About Utilities.clearWorkMap

The original listing provides no description. Judging from the comments in the examples below, clearWorkMap drops the MapWork/ReduceWork plan objects that Utilities caches for a given Hadoop Configuration (historically in a ThreadLocal work map). This matters whenever plan code runs on reused or pooled threads, where stale cached plans could otherwise leak between tasks.
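Before the examples, here is a minimal sketch of the recurring usage pattern. The class and method names are hypothetical, and the sketch assumes a Hive version in which clearWorkMap takes a Hadoop Configuration, as it does in every snippet below:

import org.apache.hadoop.hive.ql.exec.Utilities;
import org.apache.hadoop.mapred.JobConf;

public class WorkMapCleanupSketch {  // hypothetical class name
 public static void runPlanWork(JobConf jobConf) {
  try {
   // ... work that causes Utilities to cache MapWork/ReduceWork
   // objects for this configuration ...
  } finally {
   // Drop the cached plan objects so a reused (pooled) thread
   // does not observe stale work entries.
   Utilities.clearWorkMap(jobConf);
  }
 }
}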

Code Examples

Code example source: apache/hive, apache/drill

Utilities.clearWorkMap(jc);

This one-line call recurs in several files of both projects. Across the examples the argument is variously named jc, conf, jobConf, or jconf, but it is always the Hadoop JobConf/Configuration whose cached work entries should be dropped.

Code example source: apache/hive, apache/drill

// Tail of a Tez DAG build: drop the cached work objects once the DAG is assembled.
Utilities.clearWorkMap(conf);
perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.TEZ_BUILD_DAG);
return dag;

Code example source: apache/drill

public SparkPlan generate(SparkWork sparkWork) throws Exception {
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.SPARK_BUILD_PLAN);
 SparkPlan sparkPlan = new SparkPlan();
 cloneToWork = sparkWork.getCloneToWork();
 workToTranMap.clear();
 workToParentWorkTranMap.clear();
 try {
  for (BaseWork work : sparkWork.getAllWork()) {
   perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.SPARK_CREATE_TRAN + work.getName());
   SparkTran tran = generate(work, sparkWork);
   SparkTran parentTran = generateParentTran(sparkPlan, sparkWork, work);
   sparkPlan.addTran(tran);
   sparkPlan.connect(parentTran, tran);
   workToTranMap.put(work, tran);
   perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.SPARK_CREATE_TRAN + work.getName());
  }
 } finally {
  // Clear all ThreadLocal-cached MapWork/ReduceWork after plan generation,
  // as this may be executed in a pool thread.
  Utilities.clearWorkMap(jobConf);
 }
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.SPARK_BUILD_PLAN);
 return sparkPlan;
}

Code example source: apache/hive, apache/drill

@Override
void close() {
 if (cache != null && cacheKey != null) {
  cache.release(cacheKey);
 }
 // check if there are IOExceptions
 if (!isAborted()) {
  setAborted(execContext.getIoCxt().getIOExceptions());
 }
 // detecting failed executions by exceptions thrown by the operator tree
 try {
  if (mergeOp == null || mfWork == null) {
   return;
  }
  boolean abort = isAborted();
  mergeOp.close(abort);
  ExecMapper.ReportStats rps = new ExecMapper.ReportStats(reporter, jconf);
  mergeOp.preorderMap(rps);
 } catch (Exception e) {
  if (!isAborted()) {
   // signal new failure to map-reduce
   l4j.error("Hit error while closing operators - failing tree");
   throw new RuntimeException("Hive Runtime Error while closing operators",
     e);
  }
 } finally {
  // Always release the cached plan and per-thread context, even on failure.
  Utilities.clearWorkMap(jconf);
  MapredContext.close();
 }
}
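Note the pairing in the finally block: Utilities.clearWorkMap(jconf) drops the cached plan, while MapredContext.close() tears down the per-thread MapReduce context, and both run whether or not closing the operator tree threw.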

Code example source: apache/hive

) throws IOException, HiveException {
JobConf conf = new JobConf();
Utilities.clearWorkMap(conf);
conf.set("hive.exec.plan", workDir.toString());
conf.set("mapred.job.tracker", "local");

Code example source: apache/hive

public SparkPlan generate(SparkWork sparkWork) throws Exception {
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.SPARK_BUILD_PLAN);
 SparkPlan sparkPlan = new SparkPlan(this.jobConf, this.sc.sc());
 cloneToWork = sparkWork.getCloneToWork();
 workToTranMap.clear();
 workToParentWorkTranMap.clear();
 try {
  for (BaseWork work : sparkWork.getAllWork()) {
   // Run the SparkDynamicPartitionPruner here instead of inside the InputFormat,
   // so that pruning does not have to run when creating a RecordReader.
   runDynamicPartitionPruner(work);
   perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.SPARK_CREATE_TRAN + work.getName());
   SparkTran tran = generate(work, sparkWork);
   SparkTran parentTran = generateParentTran(sparkPlan, sparkWork, work);
   sparkPlan.addTran(tran);
   sparkPlan.connect(parentTran, tran);
   workToTranMap.put(work, tran);
   perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.SPARK_CREATE_TRAN + work.getName());
  }
 } finally {
  // Clear all ThreadLocal-cached MapWork/ReduceWork after plan generation,
  // as this may be executed in a pool thread.
  Utilities.clearWorkMap(jobConf);
 }
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.SPARK_BUILD_PLAN);
 return sparkPlan;
}
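In both generate() variants the cleanup lives in a finally block for the reason the comment gives: plan generation may run on a pooled thread, and without clearWorkMap the MapWork/ReduceWork objects cached during one plan build would remain visible to whatever work the pool schedules on that thread next.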

Code example source: apache/hive, apache/drill

Utilities.clearWorkMap(jconf);
MapredContext.close();

Code example source: apache/phoenix

Utilities.clearWorkMap(conf);
CliSessionState ss = new CliSessionState(conf);
assert ss != null;
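Unlike the earlier examples, Phoenix clears the work map before creating a fresh CliSessionState rather than after finishing work; the intent is the same, ensuring the session starts on a thread with no stale cached plans.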
