This article collects Java code examples for the org.apache.spark.api.java.JavaPairRDD.toLocalIterator()
method, showing how JavaPairRDD.toLocalIterator() is used in practice. The examples are drawn from
selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should
serve as useful references. Details of the JavaPairRDD.toLocalIterator() method:
Package: org.apache.spark.api.java
Class: JavaPairRDD
Method: toLocalIterator
Description: none available.
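Before the project excerpts below, here is a minimal, self-contained sketch of the basic usage. The class name ToLocalIteratorDemo and the sample data are illustrative; it assumes spark-core is on the classpath and runs Spark in local mode:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Iterator;

public class ToLocalIteratorDemo {

    // Streams the RDD's pairs to the driver one partition at a time and sums the values.
    // Unlike collect(), toLocalIterator() only needs enough driver memory to hold the
    // largest single partition, not the whole RDD.
    static int sumValuesLocally(JavaPairRDD<String, Integer> pairs) {
        int sum = 0;
        Iterator<Tuple2<String, Integer>> it = pairs.toLocalIterator();
        while (it.hasNext()) {
            sum += it.next()._2();
        }
        return sum;
    }

    public static void main(String[] args) {
        // Local-mode context, for demonstration only; no cluster required.
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("toLocalIterator-demo");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
                    new Tuple2<>("a", 1),
                    new Tuple2<>("b", 2),
                    new Tuple2<>("c", 3)), 2);
            System.out.println("sum = " + sumValuesLocally(pairs));
        }
    }
}
```

Note that each call to next() may trigger a Spark job to fetch the next partition, so this trades driver memory for extra scheduling overhead compared with collect().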
Code example source (origin: apache/tinkerpop):
final InputRDD inputRDD = (InputRDD) Class.forName(sparkConfiguration.get(Constants.GREMLIN_HADOOP_GRAPH_READER)).newInstance();
final JavaSparkContext javaSparkContext = new JavaSparkContext(Spark.create(sparkConfiguration));
final Iterator<Tuple2<Object, VertexWritable>> iterator = inputRDD.readGraphRDD(ConfUtil.makeApacheConfiguration(taskAttemptContext.getConfiguration()), javaSparkContext).toLocalIterator();
return new RecordReader<NullWritable, VertexWritable>() {
    @Override
Code example source (origin: com.davidbracewell/mango):
@Override
public void forEachLocal(@NonNull SerializableBiConsumer<? super T, ? super U> consumer) {
    rdd.toLocalIterator().forEachRemaining(e -> {
        Configurator.INSTANCE.configure(configBroadcast.value());
        consumer.accept(e._1(), e._2());
    });
}