This article collects code examples of the Java method org.apache.spark.api.java.JavaSparkContext.objectFile(), showing how JavaSparkContext.objectFile() is used in practice. The examples are extracted from selected open-source projects indexed on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of JavaSparkContext.objectFile() follow:
Package path: org.apache.spark.api.java.JavaSparkContext
Class name: JavaSparkContext
Method name: objectFile
Description: loads an RDD that was previously written with JavaRDD.saveAsObjectFile(), i.e. a SequenceFile of serialized Java objects. The element type is supplied by the caller and is not checked at runtime. An overload objectFile(path, minPartitions) additionally accepts a suggested minimum number of partitions.
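
Before the collected examples, here is a minimal, self-contained round-trip sketch. It is not taken from any of the projects below; the application name, local master, and /tmp output path are illustrative only:

import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ObjectFileRoundTrip {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("objectfile-demo").setMaster("local[2]");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      // Write an RDD as a SequenceFile of serialized objects.
      // Note: saveAsObjectFile fails if the target directory already exists.
      JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4));
      rdd.saveAsObjectFile("/tmp/objectfile-demo");

      // Read it back; the element type is the caller's claim and is
      // not verified at runtime.
      JavaRDD<Integer> readBack = sc.objectFile("/tmp/objectfile-demo");
      List<Integer> values = readBack.collect();
      System.out.println(values); // [1, 2, 3, 4]
    }
  }
}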
Code example source: org.apache.spark/spark-core (the identical test also appears in spark-core_2.10 and spark-core_2.11)
@SuppressWarnings("unchecked")
@Test
public void objectFilesOfComplexTypes() {
  String outputDir = new File(tempDir, "output").getAbsolutePath();
  List<Tuple2<Integer, String>> pairs = Arrays.asList(
      new Tuple2<>(1, "a"),
      new Tuple2<>(2, "aa"),
      new Tuple2<>(3, "aaa")
  );
  JavaPairRDD<Integer, String> rdd = sc.parallelizePairs(pairs);
  rdd.saveAsObjectFile(outputDir);
  // Try reading the output back as an object file
  JavaRDD<Tuple2<Integer, String>> readRDD = sc.objectFile(outputDir);
  assertEquals(pairs, readRDD.collect());
}
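
A note on the test above: the @SuppressWarnings("unchecked") annotation reflects that objectFile() takes its element type purely from the call site; nothing is checked at runtime, and a wrong type parameter surfaces only as a ClassCastException when elements are used. The overload taking a minimum partition count is called the same way; a brief sketch reusing the test's outputDir:

// Ask Spark for at least 8 partitions when reading the object file back.
JavaRDD<Tuple2<Integer, String>> readRDD = sc.objectFile(outputDir, 8);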
Code example source: org.apache.spark/spark-core_2.11 (the identical test also appears in spark-core and spark-core_2.10)
@Test
public void objectFilesOfInts() {
  String outputDir = new File(tempDir, "output").getAbsolutePath();
  JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4));
  rdd.saveAsObjectFile(outputDir);
  // Try reading the output back as an object file
  List<Integer> expected = Arrays.asList(1, 2, 3, 4);
  JavaRDD<Integer> readRDD = sc.objectFile(outputDir);
  assertEquals(expected, readRDD.collect());
}
Code example source: phuonglh/vn.vitk
/**
 * Loads a pre-trained MLP transition classifier from a file.
 * @param jsc the Java Spark context
 * @param classifierFileName the file the classifier was saved to
 * @return the deserialized classification model
 */
public static MultilayerPerceptronClassificationModel load(JavaSparkContext jsc, String classifierFileName) {
  Object object = jsc.objectFile(classifierFileName).first();
  return (MultilayerPerceptronClassificationModel) object;
}
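
The excerpt shows only the load side. A plausible save counterpart would look like the sketch below; it is not part of vn.vitk's code and simply assumes the model was written as a one-element RDD, which is what load() expects when it calls first():

public static void save(JavaSparkContext jsc, MultilayerPerceptronClassificationModel model, String classifierFileName) {
  // Wrap the model in a single-element RDD and serialize it, so that
  // load() can later read it back with jsc.objectFile(...).first().
  jsc.parallelize(Arrays.asList(model)).saveAsObjectFile(classifierFileName);
}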
Code example source: org.qcri.rheem/rheem-spark
@Override
public Tuple<Collection<ExecutionLineageNode>, Collection<ChannelInstance>> evaluate(
    ChannelInstance[] inputs,
    ChannelInstance[] outputs,
    SparkExecutor sparkExecutor,
    OptimizationContext.OperatorContext operatorContext) {
  final String sourcePath;
  if (this.sourcePath != null) {
    assert inputs.length == 0;
    sourcePath = this.sourcePath;
  } else {
    FileChannel.Instance input = (FileChannel.Instance) inputs[0];
    sourcePath = input.getSinglePath();
  }
  RddChannel.Instance output = (RddChannel.Instance) outputs[0];
  final String actualInputPath = FileSystems.findActualSingleInputPath(sourcePath);
  final JavaRDD<Object> rdd = sparkExecutor.sc.objectFile(actualInputPath);
  this.name(rdd);
  output.accept(rdd, sparkExecutor);
  return ExecutionOperator.modelLazyExecution(inputs, outputs, operatorContext);
}
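
Note how the Rheem operator reads the file as JavaRDD<Object> and leaves typing to downstream operators. Outside such a framework, the same untyped read followed by an explicit per-element cast looks like this (a sketch; Integer stands in for whatever type was actually serialized, and the path reuses the earlier demo):

JavaRDD<Object> raw = sc.objectFile("/tmp/objectfile-demo");
// A wrong cast fails per element at action time, not at this line.
JavaRDD<Integer> typed = raw.map(o -> (Integer) o);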