delta-lake: org.apache.spark.sql.AnalysisException: Table or view not found: default.history;;

a1o7rhls · asked 2021-07-13 · in Spark

Tech stack:
Spark: 3.1.0
Delta: 0.7.0
Scala: 2.12.8
I am creating a Delta table like this (on AWS EMR, specifically emr-6.2.0):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Spark3_Delta_Dema")
  .master("local[*]")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()

spark.sql("CREATE OR REPLACE TABLE delta_table_name ( column1 STRING, column2 Binary, column3 BIGINT) USING DELTA PARTITIONED BY (column3) LOCATION 's3a://...........'")

The table is created correctly, but the problem comes when I try to run a DESCRIBE HISTORY query. I have tried:

DESCRIBE HISTORY '<s3 location>'
DESCRIBE HISTORY delta.`delta_table_name`
DESCRIBE HISTORY delta_table_name
DESCRIBE HISTORY <s3 location>

but all of them fail with the same error (the documented forms are sketched after this list).
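For comparison, the Delta Lake docs give two forms for DESCRIBE HISTORY: by table name, or by delta. followed by a backquoted path. The bucket/path below is a placeholder, not the questioner's actual location:

// DESCRIBE HISTORY forms per the Delta Lake docs (placeholder path):
spark.sql("DESCRIBE HISTORY delta_table_name")           // by table name
spark.sql("DESCRIBE HISTORY delta.`s3a://bucket/path`")  // by path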
Error log:

An error was encountered:
org.apache.spark.sql.AnalysisException: Table or view not found: default.history;;
'UnresolvedRelation [default, history]

  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:106)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:92)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:188)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:92)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:89)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:130)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:156)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:153)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:68)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:149)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:153)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:153)
  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:68)
  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:66)
  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:58)
  at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:92)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:90)
  at org.apache.spark.sql.SparkSession.table(SparkSession.scala:589)
  at org.apache.spark.sql.execution.command.DescribeColumnCommand.run(tables.scala:773)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:230)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3667)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:104)
  at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:227)
  at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:107)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:132)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:104)
  at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:227)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:132)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:248)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:131)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3665)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:230)
  at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:101)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98)
  at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
  ... 53 elided
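Note: the trace shows the statement reaching DescribeColumnCommand, i.e. Spark's built-in parser treated HISTORY as a table name and the rest as a column. That suggests the Delta SQL extension never intercepted DESCRIBE HISTORY; one common cause is a Spark/Delta version mismatch, since Delta 0.7.0 was built against Spark 3.0.x rather than 3.1. A minimal sanity check, assuming nothing beyond standard Spark conf keys:

// Confirm the Delta extension and catalog are set on the live session.
println(spark.conf.get("spark.sql.extensions"))
println(spark.conf.get("spark.sql.catalog.spark_catalog"))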

Update:
When I do this:

import io.delta.tables.DeltaTable

val deltaTable = DeltaTable.forPath(spark, "s3a://..................")
val fullHistory = deltaTable.history()
fullHistory.show()

then I can see the results.
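The same DataFrame API also takes a limit, so a small variation on the working call above can pull just the latest commit (column names are from the Delta history schema):

// Most recent commit only, projected to a few history columns.
deltaTable.history(1)
  .select("version", "timestamp", "operation")
  .show(false)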
Thanks in advance.
