How to handle spaces in a varchar NOT NULL column when reading an Azure Synapse table from Spark in Databricks

jdg4fx2g · posted 2021-05-24 in Spark

I am running into a problem when reading a table from a Synapse database into Spark (using Azure Databricks). The table is defined as follows:

CREATE TABLE A
(
    [ID] [int] NOT NULL,
    [Value] [int] NOT NULL,
    [Description] [nvarchar](30) NOT NULL
)
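For context, here is a minimal sketch of how such a table is typically read from Databricks through the Synapse connector (the one named in the error below, `com.databricks.spark.sqldw`). The server, database, and storage paths are assumed placeholders, not the actual connection details:

```python
# Sketch of reading table A via the Azure Synapse connector.
# Every connection value here is an assumed placeholder.
synapse_options = {
    # JDBC URL of the Synapse instance -- placeholder
    "url": "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb",
    # Staging area the connector uses to move data -- placeholder
    "tempDir": "abfss://tempdir@mystorageaccount.dfs.core.windows.net/staging",
    "forwardSparkAzureStorageCredentials": "true",
    "dbTable": "A",
}

# On a Databricks cluster the read itself would look like:
# df = (spark.read
#           .format("com.databricks.spark.sqldw")
#           .options(**synapse_options)
#           .load())
# df.show()   # actions like this are what trigger the error below
```

Defining the read succeeds because Spark is lazy; the connector only executes the underlying query when an action such as `show()` or `count()` runs.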

The `Description` field can be empty (i.e. `""`) or can contain only spaces. In Synapse I have no problem with this field, and none when I read the table into a DataFrame in Spark. The problem appears when I run something like `df.show()` or `df.count()`. The following error occurs:

Py4JJavaError: An error occurred while calling o1779.showString.
: com.databricks.spark.sqldw.SqlDWSideException: Azure Synapse Analytics failed to execute the JDBC query produced by the connector.

Underlying SQLException(s):
  - com.microsoft.sqlserver.jdbc.SQLServerException: Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.
Column ordinal: 2, Expected data type: NVARCHAR(30) collate SQL_Latin1_General_CP1_CI_AS NOT NULL. [ErrorCode = 107090] [SQLState = S0001]
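A hedged sketch of one common mitigation for this class of error: the connector stages rows as files in `tempDir`, and an empty string read back from a staged file can come back as NULL, which is then rejected against the `NVARCHAR(30) ... NOT NULL` definition reported above. The connector supports a `query` option in place of `dbTable`, so the empty values can be rewritten before staging. The `CASE` padding below is an assumption about what is acceptable for this data, not a verified fix:

```python
# Workaround sketch (assumption, not a confirmed fix): push down a query that
# rewrites empty Description values so the staged file never contains a field
# that would be read back as NULL and rejected against the NOT NULL column.
query = """
SELECT [ID],
       [Value],
       CASE WHEN [Description] = '' THEN ' '
            ELSE [Description]
       END AS [Description]
FROM A
"""

# With the Synapse connector, "query" replaces the "dbTable" option:
# df = (spark.read
#           .format("com.databricks.spark.sqldw")
#           .option("url", "...")        # placeholder
#           .option("tempDir", "...")    # placeholder
#           .option("forwardSparkAzureStorageCredentials", "true")
#           .option("query", query)
#           .load())
```

If trailing spaces must survive the round trip exactly, padding with `' '` changes the data; an alternative would be rejecting such rows upstream or relaxing the column to allow NULL.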

No answers yet.
