This article collects code examples of the Java method org.apache.spark.api.java.JavaRDD.name(), showing how JavaRDD.name() is used in practice. The examples are extracted from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of JavaRDD.name() are as follows:
Package: org.apache.spark.api.java
Class: JavaRDD
Method: name
Description: none available.
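Before the extracted examples, here is a minimal sketch of the method itself: name() returns the label previously assigned to the RDD with setName() (it is null until a name is set). The class name RddNameExample and the name "demo-rdd" are illustrative choices, and the context runs in local mode for demonstration only.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddNameExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("RddNameExample")
                .setMaster("local[*]"); // local mode, for illustration only
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> rdd = sc.parallelize(Arrays.asList("a", "b", "c"));
            // name() returns null until a name has been assigned via setName()
            rdd.setName("demo-rdd");
            System.out.println(rdd.name()); // prints "demo-rdd"
        }
    }
}
```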
Code example source: origin: org.apache.spark/spark-core_2.11

    @Test
    public void testGetPersistentRDDs() {
        java.util.Map<Integer, JavaRDD<?>> cachedRddsMap = sc.getPersistentRDDs();
        assertTrue(cachedRddsMap.isEmpty());
        JavaRDD<String> rdd1 = sc.parallelize(Arrays.asList("a", "b")).setName("RDD1").cache();
        JavaRDD<String> rdd2 = sc.parallelize(Arrays.asList("c", "d")).setName("RDD2").cache();
        cachedRddsMap = sc.getPersistentRDDs();
        assertEquals(2, cachedRddsMap.size());
        // The map is keyed by RDD id; the first two RDDs created get ids 0 and 1.
        assertEquals("RDD1", cachedRddsMap.get(0).name());
        assertEquals("RDD2", cachedRddsMap.get(1).name());
    }
The same example also appears verbatim under the origins org.apache.spark/spark-core_2.10 and org.apache.spark/spark-core.
Code example source: origin: uber/marmaray

    public final void execute() {
        this.forkFunction.registerAccumulators(this.inputRDD.rdd().sparkContext());
        // Converts JavaRDD<DI> into JavaRDD<ForkData<DI>>
        JavaRDD<ForkData<DI>> forkedData = this.inputRDD.flatMap(this.forkFunction)
            .persist(this.persistLevel);
        final String jobName = SparkJobTracker.getJobName(this.inputRDD.rdd().sparkContext());
        forkedData.setName(String.format("%s-%s", jobName, forkedData.id()));
        // Deliberately calling count() so that the DAG gets executed.
        final long processedRecords = forkedData.count();
        final Optional<RDDInfo> rddInfo = SparkUtil.getRddInfo(forkedData.context(), forkedData.id());
        log.info("#processed records :{} name:{}", processedRecords, forkedData.name());
        if (rddInfo.isPresent()) {
            final long size = rddInfo.get().diskSize() + rddInfo.get().memSize();
            log.info("rddInfo -> name:{} partitions:{} size:{}", forkedData.name(), rddInfo.get().numPartitions(),
                size);
        }
        this.groupRDD = Optional.of(forkedData);
    }