spark.sql.catalogImplementation: in-memory vs. hive

daupos2t · posted 2021-05-27 · in Spark

I'm looking at the documentation, and it seems the valid values for spark.sql.catalogImplementation are in-memory and hive. When I start Spark SQL it comes up as in-memory, yet when I save a table it gets stored in the Hive metastore.
I'm quite confused:

What is the default catalog for Spark SQL?
How does in-memory differ from hive? (I assume hive refers to the Hive metastore, and in-memory means the catalog lives in RAM?)
When I list the tables in the catalog, why does it show only tables created in the current session rather than all tables?
Here is what I see:

scala> Seq((1, 2)).toDF("i", "j").write.mode("overwrite").saveAsTable("t2")

scala>

scala> sql("set spark.sql.catalogImplementation").show(false)
+-------------------------------+---------+
|key                            |value    |
+-------------------------------+---------+
|spark.sql.catalogImplementation|in-memory|
+-------------------------------+---------+

scala>

scala> spark.catalog.listTables().show()
+----+--------+-----------+---------+-----------+
|name|database|description|tableType|isTemporary|
+----+--------+-----------+---------+-----------+
|  t2| default|       null|  MANAGED|      false|
+----+--------+-----------+---------+-----------+

scala> :quit
(base) [hands-on@localhost ~]$ spark-shell
Spark context Web UI available at http://localhost:4040
Spark context available as 'sc' (master = local[*], app id = local-1595928466572).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.5
      /_/

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_232)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.catalog.listTables().show()
+----+--------+-----------+---------+-----------+
|name|database|description|tableType|isTemporary|
+----+--------+-----------+---------+-----------+
+----+--------+-----------+---------+-----------+
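For anyone reproducing this, the catalog implementation can be pinned explicitly when launching the shell, instead of relying on the build-dependent default. A sketch (the property name is from the Spark configuration docs; which default you get depends on whether your Spark build includes Hive support):

```shell
# Sketch: start spark-shell with the catalog implementation set explicitly.

# in-memory: catalog metadata lives only in the driver's memory for this session
spark-shell --conf spark.sql.catalogImplementation=in-memory

# hive: catalog is backed by a Hive metastore, so tables persist across sessions
spark-shell --conf spark.sql.catalogImplementation=hive
```

When building a session in code, `SparkSession.builder().enableHiveSupport()` has the same effect as setting the property to `hive`.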
