Usage of the org.apache.spark.api.java.JavaRDD.unpersist() method, with code examples

x33g5p2x · reposted 2022-01-21 · category: Other

This article collects code examples of the Java method org.apache.spark.api.java.JavaRDD.unpersist() and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should make useful references. Details of JavaRDD.unpersist():

Package: org.apache.spark.api.java
Class: JavaRDD
Method: unpersist

About JavaRDD.unpersist

Marks the RDD as non-persistent and removes all of its blocks from memory and disk. The no-argument form returns immediately; the unpersist(boolean blocking) overload can wait until the blocks have actually been deleted. Both return the RDD itself, so calls can be chained.
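As context for the snippets below, here is a minimal, self-contained sketch of the persist/unpersist lifecycle. The class name and setup are ours, not from any of the quoted projects; it assumes a spark-core dependency and a local master:

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

/** Minimal sketch of the cache/unpersist lifecycle (hypothetical demo class). */
public class UnpersistDemo {

    /** Caches a small RDD, runs an action against it, then releases the cached blocks. */
    public static long countCached() {
        SparkConf conf = new SparkConf().setMaster("local[1]").setAppName("unpersist-demo");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3)).cache();
            long n = rdd.count();   // first action materializes the cache
            rdd.unpersist(true);    // blocking overload: wait until the blocks are removed
            return n;
        }
    }

    public static void main(String[] args) {
        System.out.println(countCached()); // prints 3
    }
}
```

Calling unpersist() is optional (cached blocks are evicted LRU-style and dropped when the context stops), but releasing them explicitly, as the examples below do, frees memory for later stages.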

Code examples

Code example from: OryxProject/oryx

newData.unpersist();
pastData.unpersist();

Code example from: OryxProject/oryx

pmmlToMFModel(sparkContext, model, modelParentPath, bUserIDToIndex, bItemIDToIndex);
parsedTestRDD.unpersist();

Code example from: OryxProject/oryx

bItemIDToIndex.unpersist();
parsedRDD.unpersist();

Code example from: uber/marmaray

public void close() {
    if (this.groupRDD.isPresent()) {
      this.groupRDD.get().unpersist();
      this.groupRDD = Optional.absent();
    }
  }
}

Code example from: com.davidbracewell/mango

@Override
public void close() throws IOException {
 this.rdd.unpersist();
 if (onClose != null) {
   onClose.run();
 }
}

Code example from: com.cloudera.oryx/oryx-ml

newData.unpersist();
pastData.unpersist();

Code example from: deepspark/deepspark

train_samples.unpersist();
System.out.println("Start Learning...");
Date startTime = new Date();


Code example from: uber/hudi

@Override
public JavaRDD<HoodieRecord<T>> tagLocation(JavaRDD<HoodieRecord<T>> recordRDD, JavaSparkContext jsc,
  HoodieTable<T> hoodieTable) {
 // Step 0: cache the input record RDD
 if (config.getBloomIndexUseCaching()) {
  recordRDD.persist(config.getBloomIndexInputStorageLevel());
 }
 // Step 1: Extract out thinner JavaPairRDD of (partitionPath, recordKey)
 JavaPairRDD<String, String> partitionRecordKeyPairRDD = recordRDD
   .mapToPair(record -> new Tuple2<>(record.getPartitionPath(), record.getRecordKey()));
 // Lookup indexes for all the partition/recordkey pair
 JavaPairRDD<String, String> rowKeyFilenamePairRDD = lookupIndex(partitionRecordKeyPairRDD, jsc, hoodieTable);
 // Cache the result, for subsequent stages.
 if (config.getBloomIndexUseCaching()) {
  rowKeyFilenamePairRDD.persist(StorageLevel.MEMORY_AND_DISK_SER());
 }
 if (logger.isDebugEnabled()) {
  long totalTaggedRecords = rowKeyFilenamePairRDD.count();
  logger.debug("Number of update records (ones tagged with a fileID): " + totalTaggedRecords);
 }
 // Step 4: Tag the incoming records, as inserts or updates, by joining with existing record keys
 // Cost: 4 sec.
 JavaRDD<HoodieRecord<T>> taggedRecordRDD = tagLocationBacktoRecords(rowKeyFilenamePairRDD,
   recordRDD);
 if (config.getBloomIndexUseCaching()) {
  recordRDD.unpersist(); // unpersist the input Record RDD
  rowKeyFilenamePairRDD.unpersist();
 }
 return taggedRecordRDD;
}
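The Hudi method above follows a common pattern: persist the inputs when several actions will reuse them, run the computation, then unpersist once the result has been produced. A stripped-down sketch of the same pattern (class, method, and data are ours, not Hudi's; assumes a spark-core dependency):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

/** Sketch of the conditional cache/compute/unpersist pattern (hypothetical names). */
public class CachePatternDemo {

    public static long countAndRelease(JavaSparkContext sc, boolean useCaching) {
        JavaRDD<Integer> input = sc.parallelize(Arrays.asList(1, 2, 3, 4));
        if (useCaching) {
            // Persist because more than one action will traverse this RDD.
            input.persist(StorageLevel.MEMORY_AND_DISK_SER());
        }
        long total = input.count();                          // action 1 reuses the cache
        long evens = input.filter(x -> x % 2 == 0).count();  // action 2 reuses the cache
        if (useCaching) {
            input.unpersist(); // release the cached blocks once the work is done
        }
        return total + evens;
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[1]").setAppName("cache-pattern");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            System.out.println(countAndRelease(sc, true)); // prints 6
        }
    }
}
```

Guarding both persist() and unpersist() behind the same flag, as Hudi does with getBloomIndexUseCaching(), keeps the two calls symmetric and avoids unpersisting an RDD that was never cached.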


Code example from: edu.usc.ir/age-predictor-cli

MaxentModel ageModel = trainer.train(eventStream);
samples.unpersist();
data.unpersist();

Code example from: edu.usc.ir/age-predictor-cli

}).cache();
samples.unpersist();


Code example from: org.datavec/datavec-spark_2.11

hist = drdd.histogram(maxHistogramBuckets);
seqLengths.unpersist();

