Usage of org.apache.spark.SparkContext.unpersistRDD() with code examples


This article collects a number of Java code examples for the org.apache.spark.SparkContext.unpersistRDD() method and shows how SparkContext.unpersistRDD() is used in practice. The examples are drawn from selected projects hosted on GitHub, Stack Overflow, Maven, and similar sources, so they should serve as a useful reference. Details of SparkContext.unpersistRDD() follow:
Package: org.apache.spark
Class: SparkContext
Method: unpersistRDD

SparkContext.unpersistRDD overview

unpersistRDD(int rddId, boolean blocking) removes the persisted (cached) blocks of the RDD with the given id from memory and disk and unregisters the RDD. When blocking is false, as in all of the examples below, the call returns immediately rather than waiting for the blocks to be removed. It is the id-based, SparkContext-level counterpart of RDD.unpersist().
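
As a quick, self-contained illustration, the sketch below caches an RDD and then releases it by id through unpersistRDD(). The class name, the local[*] master, and the sample data are purely illustrative; in ordinary application code you would normally call JavaRDD.unpersist() instead. Note that unpersistRDD() is an internal method in the Scala source (declared private[spark]) and is reachable from Java only because of how Scala encodes that modifier in bytecode, which is presumably why the Hive code below calls it directly.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class UnpersistRDDExample {
 public static void main(String[] args) {
  SparkConf conf = new SparkConf().setAppName("unpersistRDD-example").setMaster("local[*]");
  JavaSparkContext jsc = new JavaSparkContext(conf);

  // Cache an RDD and remember its id, mirroring the cachedRDDIds bookkeeping in the examples below.
  JavaRDD<Integer> numbers = jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5)).cache();
  numbers.count(); // materialize the cached blocks
  int cachedRDDId = numbers.id();

  // Release the cached blocks by id; blocking=false returns immediately
  // instead of waiting for all blocks to be removed.
  jsc.sc().unpersistRDD(cachedRDDId, false);

  jsc.stop();
 }
}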

Code examples

Code example source: apache/hive

@Override
public void cleanup() {
 jobMetricsListener.cleanup(jobId);
 if (cachedRDDIds != null) {
  for (Integer cachedRDDId: cachedRDDIds) {
   // Release each cached RDD by id; blocking=false returns without
   // waiting for the blocks to actually be dropped.
   sparkContext.sc().unpersistRDD(cachedRDDId, false);
  }
 }
}

Code example source: apache/hive

/**
 * Release cached RDDs as soon as the job is done.
 * This is different from local Spark client so as
 * to save a RPC call/trip, avoid passing cached RDD
 * id information around. Otherwise, we can follow
 * the local Spark client way to be consistent.
 */
void releaseCache() {
 if (cachedRDDIds != null) {
  for (Integer cachedRDDId: cachedRDDIds) {
   jc.sc().sc().unpersistRDD(cachedRDDId, false);
  }
 }
}

The two snippets above also appear verbatim in other artifacts indexed by this site: the cleanup() method in apache/drill and com.facebook.presto.hive/hive-apache, and the releaseCache() method in com.github.hyukjinkwon/spark-client and org.spark-project.hive/spark-client.
