Big data with Kafka and Spark

ttcibm8c · posted 2021-06-08 in Kafka

I have a JSON data stream fed by a websocket; its size varies between 1 MB and 60 MB per second.
I first decode the data, then parse it, and finally write it to MySQL.
I have two ideas:
1) Read the data from the socket, decode it in the producer and send it to the consumer as Avro, then pick the records up in a Spark map and write them to MySQL, so that the consumer has less work to do (see the Avro sketch after the producer code below).
2) Read the data from the socket, send the raw data from the producer to the consumer, then receive it in the consumer, decode and parse it in Spark, and write the parsed data to MySQL from the Spark job (see the sketch after the consumer code below).
Any ideas?
Producer

/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package com.tan;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;


import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
/**
 *
 * @author Tan
 */
public class MainKafkaProducer {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws InterruptedException {
        // TODO code application logic here
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        //props.put("group.id", "mygroup");
        //props.put("max.partition.fetch.bytes", "100000000");
        //props.put("serializer.class", "kafka.serializer.StringEncoder");
        //props.put("partitioner.class","kafka.producer.DefaultPartitioner");
        //props.put("request.required.acks", "1");

         KafkaProducer<String, String> producer = new KafkaProducer<>(props);

         // In production this would read from the websocket; for testing, read one
         // JSON line from a local file and send the same record 100 times.
         //for (int i = 0; i < 100; i++) {
            String fileName = "/Users/Tan/Desktop/feed.json";
            try{
                BufferedReader file = new BufferedReader(new FileReader(fileName));
                String st = file.readLine();
                for(int i = 0; i < 100; i++)
                {
                    ProducerRecord<String, String> record = new ProducerRecord<>("mytopic", st);
                    producer.send(record);
                }
            }catch(IOException e){
                e.printStackTrace();
            }
        //}

        /*
        for(int i = 0; i < 100; i++)
        {
            ProducerRecord<String, String> record2 = new ProducerRecord<>("mytopic", "Hasan-" + i);
            producer.send(record2);
        }
        */

        producer.close();
    }

}
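For reference, a rough sketch of what option 1's producer side could look like if the decoded payload is wrapped in an Avro record before it is sent. The "Feed" schema, its single "payload" field, and the class name are illustrative assumptions, and the sketch needs the org.apache.avro:avro dependency on the classpath.

// Sketch only: option 1, encoding the payload as Avro in the producer.
// The "Feed" schema and its "payload" field are assumptions for illustration.
package com.tan;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class MainKafkaAvroProducer {

    private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Feed\",\"fields\":"
          + "[{\"name\":\"payload\",\"type\":\"string\"}]}");

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // The value is now a raw Avro byte array instead of a JSON string.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);
             BufferedReader file = new BufferedReader(
                     new FileReader("/Users/Tan/Desktop/feed.json"))) {
            String line = file.readLine();
            producer.send(new ProducerRecord<>("mytopic", toAvro(line)));
        }
    }

    // Wrap the decoded JSON string in an Avro record and serialize it to bytes.
    private static byte[] toAvro(String payload) throws IOException {
        GenericRecord record = new GenericData.Record(SCHEMA);
        record.put("payload", payload);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(SCHEMA);
        writer.write(record, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}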

Consumer

/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package com.tan;

import kafka.serializer.StringDecoder;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
/**
 *
 * @author Tan
 */
public class MainKafkaConsumer {
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {

        SparkConf conf = new SparkConf()
                .setAppName(MainKafkaConsumer.class.getName())
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(2000));

        Set<String> topics = Collections.singleton("mytopic");
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");

        JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(ssc, 
                String.class, String.class, 
                StringDecoder.class, StringDecoder.class, 
                kafkaParams, topics);

        // For now just print each record's value; the JSON parsing and MySQL write
        // would go here.
        directKafkaStream.foreachRDD(rdd -> {
            rdd.foreach(record -> {
                System.out.println(record._2);
            });
        });
        /*
        directKafkaStream.foreachRDD(rdd -> {
            System.out.println("--- New RDD with " + rdd.partitions().size()
                    + " partitions and " + rdd.count() + " records");
            rdd.foreach(record -> {
                System.out.println(record._2);
            });
        });
        */

        ssc.start();
        ssc.awaitTermination();

    }

}
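For option 2, the Spark side could parse the JSON and write it to MySQL roughly as in the sketch below. The feeds database, the feed_events(payload) table, the JDBC URL and credentials, and the use of Jackson for parsing are all assumptions; a MySQL JDBC driver and jackson-databind would need to be on the classpath.

// Sketch only: option 2, parsing the JSON in Spark and batching inserts into MySQL.
package com.tan;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class MainKafkaConsumerToMysql {

    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setAppName(MainKafkaConsumerToMysql.class.getName())
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(2000));

        Set<String> topics = Collections.singleton("mytopic");
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");

        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(ssc,
                String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.foreachRDD(rdd -> {
            // Open one JDBC connection per partition, not per record.
            rdd.foreachPartition(records -> {
                ObjectMapper mapper = new ObjectMapper();
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/feeds", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO feed_events(payload) VALUES (?)")) {
                    while (records.hasNext()) {
                        Tuple2<String, String> record = records.next();
                        // Parse (and implicitly validate) the JSON on the Spark side.
                        JsonNode json = mapper.readTree(record._2);
                        ps.setString(1, json.toString());
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
            });
        });

        ssc.start();
        ssc.awaitTermination();
    }
}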

Answer 1 (w6lpcovy)

Your process is fine; the main point is the Avro conversion. Your data is not that large, 1 MB to 60 MB per second.
I have a similar process here: read data from an MQ, process it, convert it to Avro, send it to Kafka, consume from Kafka, parse the data and publish it to another MQ.
Avro helps a lot when the data is large, say 1 GB or more. But in some cases the data is very small, say under 10 MB; then Avro makes the processing a bit slower and there is no gain in network transfer.
My suggestion: if your network is good enough, don't bother converting to Avro; you are better off without it. To improve performance on the Spark side, configure the Kafka topic with many partitions, because with only one partition Spark cannot parallelize properly. Reading up on that should help you.
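To illustrate the partitioning advice, a topic with several partitions can be created programmatically with Kafka's AdminClient, roughly as in the sketch below. The partition count of 8 and replication factor of 1 are just examples, and this assumes a client version that ships AdminClient (0.11 or later).

// Sketch only: create the topic with several partitions so the Spark direct
// stream can read them in parallel.
package com.tan;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class CreatePartitionedTopic {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 8 partitions -> up to 8 parallel Spark tasks per micro-batch.
            NewTopic topic = new NewTopic("mytopic", 8, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}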
