hive - scala.MatchError when trying to execute a join in a Spark SQL query

w3nuxt5m posted 12 months ago in Hive

I have a query that I am trying to run through Spark SQL. When I query each table on its own, I can see the results, but when I join them I get a scala.MatchError.

The query I am using:

query = '''
    WITH
      a1 AS (
         SELECT val_in_min, val_date
           FROM table1
      ),
      a2 AS (
         SELECT val_in_sec, val_date
           FROM table2
      )
    SELECT a.val_in_min, a.val_date, b.val_in_sec
      FROM a1 a
      JOIN a2 b
        ON a.val_date=b.val_date
'''
spark.sql(query).show()

* Sample values for table1

|val_date  |val_in_min |
|----------|-----------|
|2023-05-01|   11.4833 |
|2023-05-02|   9.90000 |

* Sample values for table2

|val_date  |val_in_sec |
|----------|-----------|
|2023-05-01|   23      |
|2023-05-02|   26      |
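
For reference, the two tables can be stood in for with temporary views built from the sample rows above (a minimal reproduction sketch; the real table1/table2 are Hive tables, so this only mirrors the shape of the data):

from pyspark.sql import SparkSession

# In the EMR notebook `spark` already exists; the builder is only here to make
# the snippet self-contained.
spark = SparkSession.builder.appName("join-repro").getOrCreate()

# Temporary views standing in for the Hive tables table1/table2,
# populated with the sample rows shown above.
spark.createDataFrame(
    [("2023-05-01", 11.4833), ("2023-05-02", 9.90000)],
    ["val_date", "val_in_min"],
).createOrReplaceTempView("table1")

spark.createDataFrame(
    [("2023-05-01", 23), ("2023-05-02", 26)],
    ["val_date", "val_in_sec"],
).createOrReplaceTempView("table2")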

* Error message

An error was encountered:
An error occurred while calling o120.showString.
: scala.MatchError: list#77341 [] (of class org.apache.spark.sql.catalyst.expressions.ListQuery)
    at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.hasFilterPredicate(SizeBasedJoinReorder.scala:285)
    at scala.collection.LinearSeqOptimized.exists(LinearSeqOptimized.scala:95)
    at scala.collection.LinearSeqOptimized.exists$(LinearSeqOptimized.scala:92)
    at scala.collection.immutable.List.exists(List.scala:89)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:750)

Traceback (most recent call last):
  File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/pyspark.zip/pyspark/sql/dataframe.py", line 441, in show
    print(self._jdf.showString(n, 20, vertical))
  File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/pyspark.zip/pyspark/sql/utils.py", line 128, in deco
    return f(*a, **kw)
  File "/mnt/yarn/usercache/appcache/application_1686133156822_0013/container_1686133156822_0013_01_000001/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o120.showString.
: scala.MatchError: list#77341 [] (of class org.apache.spark.sql.catalyst.expressions.ListQuery)
    at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.hasFilterPredicate(SizeBasedJoinReorder.scala:285)
    at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.$anonfun$hasFilterPredicate$1(SizeBasedJoinReorder.scala:293)
    at org.apache.spark.sql.catalyst.optimizer.SizeBasedJoinReorder$.$anonfun$hasFilterPredicate$1$adapted(SizeBasedJoinReorder.scala:293)
    at scala.collection.LinearSeqOptimized.exists(LinearSeqOptimized.scala:95)
    at scala.collection.LinearSeqOptimized.exists$(LinearSeqOptimized.scala:92)
    at scala.collection.immutable.List.exists(List.scala:89)

I tried swapping the tables in the join, but I still get this error. Please let me know what I can change in the Spark SQL to fix it.
The Spark version is 3.0.1, running on Amazon AWS.


huwehgph1#

I added "spark.sql.optimizer.sizeBasedJoinReorder.enabled": "false" to the Spark configuration, as suggested by @mazaneicha, and the Spark SQL query now works fine. Thanks for the help.
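
A minimal sketch of where that property can be set (assuming it behaves like a regular Spark SQL conf; the property name is the EMR-specific flag mentioned above):

from pyspark.sql import SparkSession

# Option 1: set the flag when the session is built.
spark = (
    SparkSession.builder
    .appName("disable-size-based-join-reorder")
    .config("spark.sql.optimizer.sizeBasedJoinReorder.enabled", "false")
    .getOrCreate()
)

# Option 2: set it on an already running session. SQL confs are generally
# runtime-settable; if the EMR optimizer only reads this one at startup,
# fall back to Option 1 or to a cluster-level configuration (see further down).
spark.conf.set("spark.sql.optimizer.sizeBasedJoinReorder.enabled", "false")

# spark.sql(query).show()  # re-run the join from the question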

Documentation:

https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-performance.html
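
The same flag can also be applied cluster-wide when the EMR cluster is created, via a configuration classification (a sketch; spark-defaults is the usual classification for Spark properties, and the JSON printed below would go into the cluster's software settings or into boto3's run_job_flow Configurations parameter):

import json

# EMR configuration classification that writes the flag into spark-defaults.conf
# on every node, so every Spark application on the cluster picks it up.
emr_configurations = [
    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.sql.optimizer.sizeBasedJoinReorder.enabled": "false",
        },
    }
]

print(json.dumps(emr_configurations, indent=2))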
