Usage of org.apache.spark.streaming.kafka.KafkaUtils.createRDD(), with code examples


This article collects Java code examples for the org.apache.spark.streaming.kafka.KafkaUtils.createRDD() method and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar sources, so they should serve as a useful reference. Details of the method:
Package: org.apache.spark.streaming.kafka
Class: KafkaUtils
Method: createRDD

About KafkaUtils.createRDD

Creates a batch RDD from Kafka 0.8 for an explicit set of offset ranges, without using a receiver: every record in the given ranges is read exactly once. The basic overload returns a JavaPairRDD of decoded keys and values; a longer overload additionally takes per-partition leader brokers and a message-handler function and returns a JavaRDD of whatever the handler produces.
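
As a quick orientation before the project snippets below, here is a minimal, self-contained sketch of the basic overload. The class name, broker address, topic name, and offset bounds are placeholders of mine, not taken from any of the projects cited below.

import java.util.HashMap;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;

public class KafkaCreateRDDExample {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("kafka-create-rdd").setMaster("local[2]"));

    // Placeholder broker address; point this at a real Kafka 0.8 cluster.
    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "localhost:9092");

    // Read offsets [0, 100) of partition 0 of a hypothetical topic "events".
    OffsetRange[] offsetRanges = { OffsetRange.create("events", 0, 0, 100) };

    // Batch read: every record in the given ranges is fetched exactly once.
    JavaPairRDD<String, String> rdd = KafkaUtils.createRDD(
        sc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, offsetRanges);

    System.out.println("records: " + rdd.count());
    sc.stop();
  }
}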

Code examples

Example source: uber/hudi

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // Pull the requested offset ranges as <String, String> pairs, keep only the
  // message payloads (JSON strings), and convert each one to an Avro record.
  return KafkaUtils.createRDD(sparkContext, String.class, String.class, StringDecoder.class, StringDecoder.class,
      kafkaParams, offsetRanges)
      .values().map(avroConvertor::fromJson);
}

Example source: uber/hudi

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // Values are decoded by Confluent's KafkaAvroDecoder, so each one is already
  // an Avro object; the map() only narrows the static type to GenericRecord.
  return KafkaUtils
      .createRDD(sparkContext, String.class, Object.class, StringDecoder.class, KafkaAvroDecoder.class, kafkaParams,
          offsetRanges).values().map(obj -> (GenericRecord) obj);
}
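
One practical note on the KafkaAvroDecoder variant above: Confluent's decoder fetches writer schemas from a schema registry, so kafkaParams must also carry the registry location. The URL below is a placeholder:

kafkaParams.put("schema.registry.url", "http://localhost:8081"); // Confluent schema registry; placeholder address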

Example source: com.uber.hoodie/hoodie-utilities — the same two methods appear again under Hudi's pre-rename Maven coordinates, differing only trivially (an intermediate local variable, and a plain lambda in place of the method reference), so they are not repeated here.

Example source: org.apache.spark/spark-streaming-kafka_2.10 (the module's Java test suite; the extract is truncated at the source, so the first call is completed below from the published createRDD signature, and the two truncated calls are sketched after the snippet)

leaders.put(new TopicAndPartition(topic2, 0), broker);

// Basic overload: decode keys and values as strings, then keep only the values.
JavaRDD<String> rdd1 = KafkaUtils.createRDD(
  sc,
  String.class,
  String.class,
  StringDecoder.class,
  StringDecoder.class,
  kafkaParams,
  offsetRanges
).map(kv -> kv._2());

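The truncated rdd2 and rdd3 calls in this suite use the longer overload, which also takes the leader map and a per-message handler. A sketch of that call shape, under the assumption that kafkaParams, offsetRanges, and leaders are set up as in the snippet:

import kafka.message.MessageAndMetadata;
import org.apache.spark.api.java.function.Function;

// Handler overload: yields a JavaRDD of whatever the handler returns
// (here just the message payload) instead of a pair RDD.
JavaRDD<String> rdd2 = KafkaUtils.createRDD(
  sc,
  String.class,                 // key class
  String.class,                 // value class
  StringDecoder.class,          // key decoder
  StringDecoder.class,          // value decoder
  String.class,                 // record class produced by the handler
  kafkaParams,
  offsetRanges,
  leaders,                      // TopicAndPartition -> Broker hints (may be empty)
  (Function<MessageAndMetadata<String, String>, String>) mmd -> mmd.message());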
The same truncated snippet is repeated verbatim under the module's other Maven coordinates: org.apache.spark/spark-streaming-kafka, spark-streaming-kafka_2.11, spark-streaming-kafka-0-8_2.11, and spark-streaming-kafka-0-8.
