SQL query crashes when run in spark-shell

dwthyt8l · asked 2021-07-14 · tagged Spark

I am trying to run a simple SQL query interactively in spark-shell. I start with all the imports:

import org.apache.spark.sql.hive.HiveContext
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import org.apache.spark.sql.functions._

// sc is the SparkContext provided by spark-shell
val hiveContext = new HiveContext(sc)
import hiveContext._

val results1 = hiveContext.sql("FROM table1 select col1")
results1.show()

The SQL statement fails with:

scala.reflect.internal.Types$TypeError: bad symbolic reference. A signature in HiveMetastoreCatalog.class refers to term cache
in package com.google.common which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling HiveMetastoreCatalog.class.
That entry seems to have slain the compiler.  Shall I replay
your session? I can re-run each line except the last one.
[y/n]
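For what it's worth, the stack trace points to a missing or incompatible Guava (`com.google.common`) version on the classpath, which `HiveContext` on Spark 1.x is known to hit. On Spark 2.x and later, the same query can instead be issued against a Hive-enabled `SparkSession`, which avoids constructing a `HiveContext` by hand. A minimal sketch, assuming Spark 2.x+ with Hive support compiled in and a metastore table named `table1`:

```scala
import org.apache.spark.sql.SparkSession

// In spark-shell a Hive-enabled session is usually already
// available as `spark`; building one explicitly looks like this.
val spark = SparkSession.builder()
  .appName("hive-query-example")  // hypothetical app name
  .enableHiveSupport()
  .getOrCreate()

// Equivalent of hiveContext.sql("FROM table1 select col1")
val results1 = spark.sql("SELECT col1 FROM table1")
results1.show()
```

`SparkSession` replaces the deprecated `SQLContext`/`HiveContext` entry points in Spark 2.x, so whether this applies depends on which Spark version the shell is running.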

