This article collects Java code examples for the org.apache.spark.streaming.kafka.KafkaUtils.createRDD() method and shows how KafkaUtils.createRDD() is used in practice. The examples are drawn mainly from GitHub, Stack Overflow, Maven and similar sources, extracted from selected projects, so they should serve as useful references. Details of the KafkaUtils.createRDD() method:
Package path: org.apache.spark.streaming.kafka.KafkaUtils
Class name: KafkaUtils
Method name: createRDD
Method description: none provided
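Before the project examples, here is a minimal, self-contained sketch of the simplest createRDD overload (JavaSparkContext, key/value classes, decoder classes, kafkaParams, offsetRanges). The broker address, topic name and offset range below are placeholder assumptions, not values taken from any of the projects quoted later.

import java.util.HashMap;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;

public class CreateRDDExample {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("createRDD-example").setMaster("local[2]"));

    // Kafka connection settings; metadata.broker.list is required by the 0.8 direct API.
    // "localhost:9092" is a placeholder broker address.
    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "localhost:9092");

    // Read a fixed offset range: topic "test", partition 0, offsets [0, 100) (placeholders).
    OffsetRange[] offsetRanges = { OffsetRange.create("test", 0, 0, 100) };

    // createRDD returns a JavaPairRDD<key, value> decoded with the given Decoder classes.
    JavaPairRDD<String, String> rdd = KafkaUtils.createRDD(
        sc, String.class, String.class, StringDecoder.class, StringDecoder.class,
        kafkaParams, offsetRanges);

    System.out.println("message count: " + rdd.count());
    sc.stop();
  }
}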
Code example source: uber/hudi

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // Read the given offset ranges as <String, String> pairs, keep only the values
  // (the JSON payloads) and convert each one to an Avro GenericRecord.
  return KafkaUtils.createRDD(sparkContext, String.class, String.class, StringDecoder.class, StringDecoder.class,
          kafkaParams, offsetRanges)
      .values().map(avroConvertor::fromJson);
}
Code example source: uber/hudi

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // Here the values are Avro-encoded, so KafkaAvroDecoder is used as the value decoder
  // and each decoded object is cast to GenericRecord.
  return KafkaUtils
      .createRDD(sparkContext, String.class, Object.class, StringDecoder.class, KafkaAvroDecoder.class, kafkaParams,
          offsetRanges).values().map(obj -> (GenericRecord) obj);
}
Code example source: com.uber.hoodie/hoodie-utilities

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // Same pattern as above, from the hoodie-utilities artifact: decode Avro values with
  // KafkaAvroDecoder and cast each decoded object to GenericRecord.
  JavaRDD<GenericRecord> recordRDD = KafkaUtils
      .createRDD(sparkContext, String.class, Object.class, StringDecoder.class, KafkaAvroDecoder.class, kafkaParams,
          offsetRanges).values().map(obj -> (GenericRecord) obj);
  return recordRDD;
}
Code example source: com.uber.hoodie/hoodie-utilities

@Override
protected JavaRDD<GenericRecord> toAvroRDD(OffsetRange[] offsetRanges, AvroConvertor avroConvertor) {
  // JSON variant from the hoodie-utilities artifact: decode the values as Strings and
  // convert each JSON string to a GenericRecord.
  return KafkaUtils.createRDD(sparkContext, String.class, String.class, StringDecoder.class, StringDecoder.class,
          kafkaParams, offsetRanges)
      .values().map(jsonStr -> avroConvertor.fromJson(jsonStr));
}
Code example source: org.apache.spark/spark-streaming-kafka_2.10 (the identical fragment is also listed under the spark-streaming-kafka, spark-streaming-kafka_2.11, spark-streaming-kafka-0-8 and spark-streaming-kafka-0-8_2.11 artifacts)

// Fragment from the spark-streaming-kafka test code. The snippet is cut off at the source,
// so each createRDD call below is missing its remaining arguments.
leaders.put(new TopicAndPartition(topic2, 0), broker);
JavaRDD<String> rdd1 = KafkaUtils.createRDD(
    sc,
    String.class,
    // ... (truncated in the source)
JavaRDD<String> rdd2 = KafkaUtils.createRDD(
    sc,
    String.class,
    // ... (truncated in the source)
JavaRDD<String> rdd3 = KafkaUtils.createRDD(
    sc,
    String.class,
    // ... (truncated in the source)
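The truncated test fragment above builds a leaders map, which corresponds to the larger createRDD overload that also takes a record class and a messageHandler. Below is a minimal sketch of that overload, assuming the spark-streaming-kafka-0-8 (Kafka 0.8) API; the broker address, topic name, partition and offsets are placeholder values, not taken from the original test suite.

import java.util.HashMap;
import java.util.Map;

import kafka.common.TopicAndPartition;
import kafka.message.MessageAndMetadata;
import kafka.serializer.StringDecoder;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.kafka.Broker;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;

public class CreateRDDWithLeadersSketch {
  public static JavaRDD<String> readValues(JavaSparkContext sc) {
    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "localhost:9092"); // placeholder broker list

    // Placeholder offset range and preferred leader for topic "test", partition 0.
    OffsetRange[] offsetRanges = { OffsetRange.create("test", 0, 0, 100) };
    Map<TopicAndPartition, Broker> leaders = new HashMap<>();
    leaders.put(new TopicAndPartition("test", 0), Broker.create("localhost", 9092));

    // With a record class and a messageHandler, createRDD returns a JavaRDD<R>
    // instead of a JavaPairRDD<K, V>; here the handler keeps only the message value.
    return KafkaUtils.createRDD(
        sc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        String.class,
        kafkaParams,
        offsetRanges,
        leaders,
        new Function<MessageAndMetadata<String, String>, String>() {
          @Override
          public String call(MessageAndMetadata<String, String> msgAndMd) {
            return msgAndMd.message();
          }
        });
  }
}

Passing an explicit leaders map lets Spark read each partition from its preferred broker without an extra metadata lookup; per the Spark documentation, an empty map is also accepted, in which case the leaders are looked up on the driver.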