Usage of org.apache.spark.api.java.JavaRDD.cartesian() with code examples

x33g5p2x · reposted 2022-01-21 in Other

This article collects code examples of the Java method org.apache.spark.api.java.JavaRDD.cartesian() and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of JavaRDD.cartesian() are as follows:
Package: org.apache.spark.api.java.JavaRDD
Class: JavaRDD
Method: cartesian

About JavaRDD.cartesian

cartesian(other) returns the Cartesian product of this RDD and another one as a JavaPairRDD: one pair (a, b) for every element a of this RDD and every element b of other. The result contains count() * other.count() elements, so it can be very expensive on large inputs.
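As a rough illustration of the semantics only (plain Java collections, no Spark runtime; the helper `cartesian` below is a hypothetical stand-in, not the Spark API), crossing `["Hello", "World"]` with `[1.0, 1.0, 2.0, 3.0, 5.0, 8.0]` yields every (string, double) pair:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class CartesianSketch {
    // Plain-Java sketch of what RDD.cartesian computes: all pairs (a, b)
    // with a from the left list and b from the right list, in order.
    static <A, B> List<Map.Entry<A, B>> cartesian(List<A> left, List<B> right) {
        List<Map.Entry<A, B>> out = new ArrayList<>();
        for (A a : left) {
            for (B b : right) {
                out.add(new AbstractMap.SimpleEntry<>(a, b));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> strings = Arrays.asList("Hello", "World");
        List<Double> doubles = Arrays.asList(1.0, 1.0, 2.0, 3.0, 5.0, 8.0);
        List<Map.Entry<String, Double>> pairs = cartesian(strings, doubles);
        // Result size is left.size() * right.size() = 2 * 6 = 12,
        // and the first pair matches cartesian.first() in the Spark tests below.
        System.out.println(pairs.size());
        System.out.println(pairs.get(0));
    }
}
```

This mirrors why the Spark tests below expect `new Tuple2<>("Hello", 1.0)` as the first element of the product.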

Code examples

Example source: org.apache.spark/spark-core_2.10

@Test
public void cartesian() {
 JavaDoubleRDD doubleRDD = sc.parallelizeDoubles(Arrays.asList(1.0, 1.0, 2.0, 3.0, 5.0, 8.0));
 JavaRDD<String> stringRDD = sc.parallelize(Arrays.asList("Hello", "World"));
 JavaPairRDD<String, Double> cartesian = stringRDD.cartesian(doubleRDD);
 assertEquals(new Tuple2<>("Hello", 1.0), cartesian.first());
}

The same test appears verbatim in org.apache.spark/spark-core_2.11 and org.apache.spark/spark-core.

Example source: mahmoudparsian/data-algorithms-book

// Cross relation R with relation S and persist every pair to disk.
JavaPairRDD<String, String> cart = R.cartesian(S);
cart.saveAsTextFile(outputPath + "/cart");

Example source: crcsmnky/mongodb-spark-demo

// Pair every user with every movie (e.g. as candidate pairs for recommendation scoring).
JavaPairRDD<Object, Object> usersMovies = userData.cartesian(movieData);

Example source: org.qcri.rheem/rheem-spark

@Override
public Tuple<Collection<ExecutionLineageNode>, Collection<ChannelInstance>> evaluate(
    ChannelInstance[] inputs,
    ChannelInstance[] outputs,
    SparkExecutor sparkExecutor,
    OptimizationContext.OperatorContext operatorContext) {
  assert inputs.length == this.getNumInputs();
  assert outputs.length == this.getNumOutputs();
  final RddChannel.Instance input0 = (RddChannel.Instance) inputs[0];
  final RddChannel.Instance input1 = (RddChannel.Instance) inputs[1];
  final RddChannel.Instance output = (RddChannel.Instance) outputs[0];
  final JavaRDD<InputType0> rdd0 = input0.provideRdd();
  final JavaRDD<InputType1> rdd1 = input1.provideRdd();
  final JavaPairRDD<InputType0, InputType1> cartesian = rdd0.cartesian(rdd1);
  this.name(cartesian);
  final JavaRDD<Tuple2<InputType0, InputType1>> crossProduct = cartesian
      .map(scalaTuple -> new Tuple2<>(scalaTuple._1, scalaTuple._2));
  this.name(crossProduct);
  output.accept(crossProduct, sparkExecutor);
  return ExecutionOperator.modelLazyExecution(inputs, outputs, operatorContext);
}

Example source: org.qcri.rheem/rheem-iejoin

.cartesian(rdd1TinyObjects)
.filter(new filterUnwantedBlocks<Type0, Type1>(this.cond0,
    list2ASC)).mapToPair(in -> new Tuple2<Long, Long>(in._1()._1(), in._2()._1()));

Example source: org.qcri.rheem/rheem-iejoin

.cartesian(rdd2TinyObjects)
.filter(new filterUnwantedBlocks<Type0, Type1>(cond0,
    list2ASC)).mapToPair(in -> new Tuple2<Long, Long>(in._1()._1(), in._2()._1()));
