Usage of org.apache.spark.api.java.JavaRDD.saveAsObjectFile() with code examples

x33g5p2x · 2022-01-21 · Reposted under: Other

This article collects code examples of the Java method org.apache.spark.api.java.JavaRDD.saveAsObjectFile() and shows how it is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of JavaRDD.saveAsObjectFile() are as follows:
Package: org.apache.spark.api.java.JavaRDD
Class: JavaRDD
Method: saveAsObjectFile

About JavaRDD.saveAsObjectFile

Saves this RDD as a SequenceFile of serialized Java objects at the given path. The data can be read back with JavaSparkContext.objectFile(), as the tests below demonstrate.
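Under the hood, saveAsObjectFile relies on standard Java serialization for the elements. The essential write-then-read-back round trip can be sketched with only JDK classes, no Spark cluster required; the class and helper names here are illustrative, not part of Spark's API.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ObjectFileSketch {
    // Write a list of integers as Java-serialized objects to a file,
    // mirroring what saveAsObjectFile does per partition.
    static void save(List<Integer> data, Path path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(path))) {
            out.writeObject(new ArrayList<>(data));
        }
    }

    // Read the serialized objects back, mirroring objectFile().
    @SuppressWarnings("unchecked")
    static List<Integer> load(Path path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(path))) {
            return (List<Integer>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("object-file-sketch", ".bin");
        List<Integer> expected = Arrays.asList(1, 2, 3, 4);
        save(expected, tmp);
        System.out.println(load(tmp)); // prints [1, 2, 3, 4]
        Files.deleteIfExists(tmp);
    }
}
```

Note that, unlike this sketch, Spark wraps the serialized bytes in Hadoop SequenceFiles and writes one file per partition, so the read side must accept a directory of part files.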

Code examples

Example source: org.apache.spark/spark-core_2.11

@Test
public void objectFilesOfInts() {
 String outputDir = new File(tempDir, "output").getAbsolutePath();
 JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4));
 rdd.saveAsObjectFile(outputDir);
 // Try reading the output back as an object file
 List<Integer> expected = Arrays.asList(1, 2, 3, 4);
 JavaRDD<Integer> readRDD = sc.objectFile(outputDir);
 assertEquals(expected, readRDD.collect());
}

The same test appears verbatim in org.apache.spark/spark-core_2.10 and org.apache.spark/spark-core.

Example source: scipr-lab/dizk

public static void generateG2Data(final Configuration config, final long size, final String path) {
  final String objectPath = path + "/vmsm-" + size;
  SparkUtils.cleanDirectory(objectPath);
  generateG2Data(config, size).saveAsObjectFile(objectPath);
}

Example source: scipr-lab/dizk

public static void generateG1Data(final Configuration config, final long size, final String path) {
  final String objectPath = path + "/vmsm-" + size;
  SparkUtils.cleanDirectory(objectPath);
  generateG1Data(config, size).saveAsObjectFile(objectPath);
}

Example source: org.qcri.rheem/rheem-spark

@Override
public Tuple<Collection<ExecutionLineageNode>, Collection<ChannelInstance>> evaluate(
    ChannelInstance[] inputs,
    ChannelInstance[] outputs,
    SparkExecutor sparkExecutor,
    OptimizationContext.OperatorContext operatorContext) {
  assert inputs.length == this.getNumInputs();
  assert outputs.length <= 1;
  final FileChannel.Instance output = (FileChannel.Instance) outputs[0];
  final String targetPath = output.addGivenOrTempPath(this.targetPath, sparkExecutor.getConfiguration());
  RddChannel.Instance input = (RddChannel.Instance) inputs[0];
  input.provideRdd()
      .coalesce(1) // TODO: Remove. This only hotfixes the issue that JavaObjectFileSource reads only a single file.
      .saveAsObjectFile(targetPath);
  LoggerFactory.getLogger(this.getClass()).info("Writing dataset to {}.", targetPath);
  return ExecutionOperator.modelEagerExecution(inputs, outputs, operatorContext);
}

Example source: phuonglh/vn.vitk

jsc.parallelize(Arrays.asList(model), 1).saveAsObjectFile(classifierPath);
