scala.collection.JavaConverters.collectionAsScalaIterableConverter(): usage and code examples


This article collects Java code examples of the scala.collection.JavaConverters.collectionAsScalaIterableConverter() method and shows how JavaConverters.collectionAsScalaIterableConverter() is used in practice. The examples are taken mainly from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the JavaConverters.collectionAsScalaIterableConverter() method:
Package: scala.collection.JavaConverters
Class: JavaConverters
Method: collectionAsScalaIterableConverter

About JavaConverters.collectionAsScalaIterableConverter

No description available.
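
Since the original description is empty, here is a brief, unofficial summary: collectionAsScalaIterableConverter wraps a java.util.Collection in a decorator whose asScala() method returns a scala.collection.Iterable view of the same collection; the examples below then usually call toSeq() or toList() on that view to obtain the Scala collection a Scala API expects. A minimal sketch (class and variable names are illustrative, assuming Scala 2.11/2.12 where JavaConverters is the standard Java-to-Scala bridge):

import java.util.Arrays;
import java.util.List;
import scala.collection.JavaConverters;

public class CollectionAsScalaExample {
  public static void main(String[] args) {
    List<String> javaList = Arrays.asList("a", "b", "c");
    // asScala() returns a scala.collection.Iterable view backed by the Java list.
    scala.collection.Iterable<String> scalaIterable =
      JavaConverters.collectionAsScalaIterableConverter(javaList).asScala();
    // Most Scala APIs want a Seq or List, so the view is usually converted further.
    scala.collection.Seq<String> scalaSeq = scalaIterable.toSeq();
    System.out.println(scalaSeq);
  }
}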

Code examples

Code example source, origin: org.apache.spark/spark-core_2.11 (the same test also appears in org.apache.spark/spark-core_2.10 and org.apache.spark/spark-core)

@Test
public void combineByKey() {
 JavaRDD<Integer> originalRDD = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6));
 Function<Integer, Integer> keyFunction = v1 -> v1 % 3;
 Function<Integer, Integer> createCombinerFunction = v1 -> v1;
 Function2<Integer, Integer, Integer> mergeValueFunction = (v1, v2) -> v1 + v2;
 JavaPairRDD<Integer, Integer> combinedRDD = originalRDD.keyBy(keyFunction)
  .combineByKey(createCombinerFunction, mergeValueFunction, mergeValueFunction);
 Map<Integer, Integer> results = combinedRDD.collectAsMap();
 ImmutableMap<Integer, Integer> expected = ImmutableMap.of(0, 9, 1, 5, 2, 7);
 assertEquals(expected, results);
 Partitioner defaultPartitioner = Partitioner.defaultPartitioner(
  combinedRDD.rdd(),
  JavaConverters.collectionAsScalaIterableConverter(
   Collections.<RDD<?>>emptyList()).asScala().toSeq());
 combinedRDD = originalRDD.keyBy(keyFunction)
  .combineByKey(
   createCombinerFunction,
   mergeValueFunction,
   mergeValueFunction,
   defaultPartitioner,
   false,
   new KryoSerializer(new SparkConf()));
 results = combinedRDD.collectAsMap();
 assertEquals(expected, results);
}
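
In this test, collectionAsScalaIterableConverter only serves to hand Partitioner.defaultPartitioner an empty Scala Seq<RDD<?>>: defaultPartitioner takes the other RDDs as a Scala varargs parameter, which from Java must be supplied as a scala.collection.Seq, hence the asScala().toSeq() chain on an empty Java list.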

Code example source, origin: com.typesafe.play/play (the same method also appears in com.typesafe.play/play_2.10, play_2.11, play_2.12, and com.sandinh/play-alone)

/**
 * Converts a Java Collection to a Scala Seq.
 *
 * @param javaCollection the java collection
 * @param <A>            the type of Seq element
 * @return the scala Seq.
 */
public static <A> scala.collection.immutable.Seq<A> asScala(Collection<A> javaCollection) {
  return scala.collection.JavaConverters.collectionAsScalaIterableConverter(javaCollection).asScala().toList();
}
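
Note that asScala() only produces a wrapper around the Java collection; the extra toList copies it into a scala.collection.immutable.List so that the result matches the declared scala.collection.immutable.Seq return type.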

Code example source, origin: com.typesafe.play/play_2.12 (the same code also appears in com.typesafe.play/play and play_2.11)

@Override
public final Seq<play.api.inject.Binding<?>> bindings(final play.api.Environment environment,
    final play.api.Configuration configuration) {
  List<play.api.inject.Binding<?>> list = bindings(environment.asJava(), configuration.underlying()).stream()
    .map(Binding::asScala)
    .collect(Collectors.toList());
  return JavaConverters.collectionAsScalaIterableConverter(list).asScala().toList();
}
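
Here the Java-side bindings are first converted to their Scala counterparts and collected into a java.util.List, and the same converter-plus-toList pattern then turns that list into the immutable Seq that the Scala play.api.inject.Module API expects.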

Code example source, origin: com.typesafe.akka/akka-http-core (the same code also appears in akka-http-core_2.11 and akka-http-core_2.12)

private static scala.collection.Iterable<HttpHeader> toScalaSeq(java.util.List<HttpHeader> _headers) {
    return scala.collection.JavaConverters.collectionAsScalaIterableConverter(_headers).asScala();
  }
}
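
Despite its name, this helper stops at asScala(): the returned value is the scala.collection.Iterable view of the header list, not a materialized Seq.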

Code example source, origin: com.twitter/util-core_2.11 (the same code also appears in com.twitter/util-core_2.12)

/**
 * @see Offer$#choose(scala.collection.Seq)
 */
public static <T> Offer<T> choose(Collection<Offer<T>> offers) {
 scala.collection.Seq<Offer<T>> scalaSeq = JavaConverters.collectionAsScalaIterableConverter(offers).asScala().toSeq();
 return Offer$.MODULE$.choose(scalaSeq);
}
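
The Java collection of offers is turned into the scala.collection.Seq that the Scala Offer.choose method (referenced in the @see tag) accepts.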

Code example source, origin: cloudera-labs/envelope

@Override
public Dataset<Row> derive(Map<String, Dataset<Row>> dependencies) throws Exception {
 dependencyCheck(dependencies);
 Dataset<Row> sourceStep = dependencies.get(stepName);
 if (useIncludeFields){
   if (!Arrays.asList(sourceStep.columns()).containsAll(includeFields)){
     throw new RuntimeException("Columns specified in " + INCLUDE_FIELDS + " are not found in input dependency schema \n" +
     "Available columns: " + Arrays.toString(sourceStep.columns()));
   }
   String firstCol = includeFields.get(0);
   includeFields.remove(0);
   return sourceStep.select(firstCol, includeFields.toArray(new String[0]));
 } else {
   if (!Arrays.asList(sourceStep.columns()).containsAll(excludeFields)){
     throw new RuntimeException("Columns specified in " + EXCLUDE_FIELDS + " are not found in input dependency schema \n" +
     "Available columns: " + Arrays.toString(sourceStep.columns()));
   }
   return sourceStep.drop(JavaConverters.collectionAsScalaIterableConverter(excludeFields).asScala().toSeq());
 }
}
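
In the exclude branch, the Java List<String> of column names is converted to a Scala Seq because the multi-column Dataset.drop variant is declared with Scala varargs (String*), so the converter bridges the two collection models before the columns are dropped.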

Code example source, origin: org.apache.spark/spark-streaming-kafka-0-10 (the same code also appears in spark-streaming-kafka-0-10_2.11)

final Collection<String> topics = Arrays.asList(topic1);
final scala.collection.Iterable<String> sTopics =
 JavaConverters.collectionAsScalaIterableConverter(topics).asScala();
final TopicPartition tp1 = new TopicPartition(topic1, 0);
final TopicPartition tp2 = new TopicPartition(topic1, 1);
final Collection<TopicPartition> parts = Arrays.asList(tp1, tp2);
final scala.collection.Iterable<TopicPartition> sParts =
 JavaConverters.collectionAsScalaIterableConverter(parts).asScala();
final Map<String, Object> kafkaParams = new HashMap<String, Object>();
kafkaParams.put("bootstrap.servers", "not used");
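
This fragment prepares scala.collection.Iterable views of a Java topic list and a Java TopicPartition list; the rest of the test (not shown here) presumably passes them to Scala-facing APIs that expect Iterable rather than java.util.Collection.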
