This article collects code examples of the Java method org.apache.spark.SparkContext.unpersistRDD(), showing how SparkContext.unpersistRDD() is used in practice. The examples were extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of SparkContext.unpersistRDD():

Package path: org.apache.spark.SparkContext
Class name: SparkContext
Method name: unpersistRDD
Description: none available
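Before the project snippets below, here is a minimal, self-contained sketch of the same pattern they all follow: cache an RDD, remember only its integer id, and later release the cache by id through SparkContext.unpersistRDD(id, blocking). The class name and setup are illustrative (not from any of the listed projects), and spark-core must be on the classpath. Note that unpersistRDD is marked private[spark] in Scala, but Scala access modifiers are not enforced in bytecode, which is why Java callers such as the Hive client can invoke it directly.

```java
// Hypothetical sketch: cache an RDD, record its id, then free it by id
// via SparkContext.unpersistRDD(id, blocking), mirroring the client code
// shown in the examples below.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class UnpersistRDDExample {

  // Returns true when no RDD ids remain in the context's
  // persistent-RDD map after unpersistRDD() has been called.
  static boolean demo() {
    SparkConf conf = new SparkConf().setAppName("unpersist-demo").setMaster("local[*]");
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      List<Integer> cachedRDDIds = new ArrayList<>();

      JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3)).cache();
      rdd.count();                       // materialize the cache
      cachedRDDIds.add(rdd.rdd().id()); // keep only the id, as the clients below do

      // blocking = false: return immediately rather than waiting
      // for all cached blocks to actually be removed.
      for (Integer id : cachedRDDIds) {
        jsc.sc().unpersistRDD(id, false);
      }
      return jsc.sc().getPersistentRDDs().isEmpty();
    }
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```

Holding ids instead of RDD references is what makes this pattern useful in client/driver split designs: the id is a plain int that can be shipped across an RPC boundary, while the RDD object cannot.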
Code example source: apache/hive

@Override
public void cleanup() {
  jobMetricsListener.cleanup(jobId);
  if (cachedRDDIds != null) {
    for (Integer cachedRDDId : cachedRDDIds) {
      sparkContext.sc().unpersistRDD(cachedRDDId, false);
    }
  }
}
Code example source: apache/hive

/**
 * Release cached RDDs as soon as the job is done.
 * This differs from the local Spark client in order to
 * save an RPC round trip and avoid passing cached RDD
 * id information around. Otherwise, we could follow the
 * local Spark client's approach for consistency.
 */
void releaseCache() {
  if (cachedRDDIds != null) {
    for (Integer cachedRDDId : cachedRDDIds) {
      jc.sc().sc().unpersistRDD(cachedRDDId, false);
    }
  }
}
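As an aside: when the caller still holds the RDD reference itself, rather than only its id as in the client code above, the usual route is the public API RDD.unpersist(blocking), which delegates to SparkContext.unpersistRDD internally and additionally resets the RDD's recorded storage level. A minimal sketch (the class name and setup are illustrative, and spark-core must be on the classpath):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class UnpersistDirectExample {

  // Returns true when the RDD reports StorageLevel.NONE after unpersist().
  static boolean demo() {
    SparkConf conf = new SparkConf().setAppName("unpersist-direct").setMaster("local[*]");
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      JavaRDD<String> rdd = jsc.parallelize(Arrays.asList("a", "b")).cache();
      rdd.count();            // materialize the cache
      rdd.unpersist(false);   // delegates to sc.unpersistRDD(id, false) internally
      return rdd.rdd().getStorageLevel().equals(StorageLevel.NONE());
    }
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```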