error: value tail is not a member of (String, String)

1hdlvixo · asked 2021-06-02 · in Hadoop

I am using the Spark shell. I have tweets stored in a Kafka topic and want to perform sentiment analysis on them from the Spark shell.
I added these dependencies: org.apache.spark:spark-streaming-kafka_2.10:1.6.2 and edu.stanford.nlp:stanford-corenlp:3.5.1
Here is the code I am working with:

import org.apache.spark._
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds 
import org.apache.spark.streaming.kafka._
val conf = new SparkConf().setMaster("local[4]").setAppName("KafkaReceiver")
val ssc = new StreamingContext(conf, Seconds(5))
val kafkaStream = KafkaUtils.createStream(ssc, "sandbox.hortonworks.com:2181","test-consumer-group", Map("test12" -> 5))
val topCounts60 = kafkaStream.map((_, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).map { case (topic, count) => (count, topic) }.transform(_.sortByKey(false))
  topCounts60.foreachRDD(rdd => {
      val topList = rdd.take(10)
      println("\nPopular topics in last 60 seconds (%s total):".format(rdd.count()))
      topList.foreach { case (count, tag) => println("%s (%s tweets)".format(tag, count)) }
    })
kafkaStream.count().map(cnt => "Received " + cnt + " kafka messages.").print()
val wordSentimentFilePath = "hdfs://sandbox.hortonworks.com:8020/TwitterData/AFINN.txt"
val wordSentiments = ssc.sparkContext.textFile(wordSentimentFilePath).map { line =>
    val Array(word, happiness) = line.split("\t")
    (word, happiness)
}.cache()
val happiest60 = kafkaStream.map(hashTag => (hashTag.tail, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).transform{topicCount => wordSentiments.join(topicCount)}
                .map{case (topic, tuple) => (topic, tuple._1 * tuple._2)}.map{case (topic, happinessValue) => (happinessValue, topic)}.transform(_.sortByKey(false))
ssc.start()
ssc.stop()

But when I execute this command,

val happiest60 = kafkaStream.map(hashTag => (hashTag.tail, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).transform{topicCount => wordSentiments.join(topicCount)}.map{case (topic, tuple) => (topic, tuple._1 * tuple._2)}.map{case (topic, happinessValue) => (happinessValue, topic)}.transform(_.sortByKey(false))

it throws the error:
error: value tail is not a member of (String, String)

wljmcqd8

The type of hashTag is probably (String, String), so the tail operation is not defined on it. tail is a function defined on collections, not on tuples. The map operation works on the individual items received from the stream, and it is normal for a Kafka receiver stream to yield (key, value) pairs of type (String, String).
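A minimal sketch of the distinction, and of one possible fix. It assumes the tweet text sits in the value part of the Kafka (key, value) pair; the names record and text are illustrative, not from the question:

```scala
// A Kafka receiver stream yields (key, value) pairs of type (String, String).
val record: (String, String) = ("someKey", "#spark")

// record.tail        // does not compile: value tail is not a member of (String, String)
val text = record._2  // tuples are accessed positionally with ._1 / ._2

// tail IS defined on collections, and a String is a Seq[Char],
// so calling tail on the value drops its first character:
val withoutHash = text.tail  // "spark"

// Applied to the question's pipeline: select the message value first,
// then call tail on it (sketch, not verified against Kafka):
// val happiest60 = kafkaStream.map { case (_, msg) => (msg.tail, 1) }
//                             .reduceByKeyAndWindow(_ + _, Seconds(60))
```

In other words, replace map(hashTag => (hashTag.tail, 1)) with a pattern match that unpacks the tuple before touching tail.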
