Migrating a table from Azure Databricks to a data warehouse (Azure Synapse)

6ju8rftf · posted 2021-07-13 in Spark

I am migrating a table from Azure Databricks to a table in a data warehouse (Azure Synapse).
The cluster has the following configuration: Spark 3.0.1, Scala 2.12.
The migration code is Scala, as follows:

...
val exitValue = exitValueGenerator(List("global_temp.wet_yield_temp"))
val delta = Entity(RenamedTable("global_temp", "wet_yield_temp", "sensor_damaged", "wet_yield_sensor_damaged_combine_history"), None, Some(FullReload), None)
var res = Source.migrate(deltalake, dwh, List(delta))
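For comparison, the same move can be attempted without the custom `Source.migrate` helper by writing the temp view directly through Databricks' built-in Synapse connector (`com.databricks.spark.sqldw`), which is what produces the `SqlDWSideException` above. This is only a sketch to isolate whether the helper or the connector path is at fault; the server, database, and storage names are placeholders, not values from the question:

```scala
// Hypothetical direct write via the built-in Synapse connector.
// <server>, <db>, <container>, <account> are placeholders.
spark.table("global_temp.wet_yield_temp")
  .write
  .format("com.databricks.spark.sqldw")
  .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
  // Staging directory the connector uses for PolyBase loads.
  .option("tempDir", "abfss://<container>@<account>.dfs.core.windows.net/tmp")
  .option("forwardSparkAzureStorageCredentials", "true")
  .option("dbTable", "sensor_damaged.wet_yield_sensor_damaged_combine_history")
  .mode("append")
  .save()
```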

I receive the following error message:

Processing global_temp.wet_yield_temp...
SELECT * FROM global_temp.wet_yield_temp
Moving 54 records...
delta: Entity = Entity(RenamedTable(global_temp,wet_yield_temp,sensor_damaged,wet_yield_sensor_damaged_combine_history),None,Some(FullReload),None)
res: Iterable[scala.util.Try[String]] =
List(Failure(com.databricks.spark.sqldw.SqlDWSideException: Azure Synapse Analytics failed to execute the JDBC query produced by the connector.
Underlying SQLException(s):
  - com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: IllegalArgumentException: Must be 12 bytes [ErrorCode = 106000] [SQLState = S0001]
         ))
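The `Must be 12 bytes` message matches the length check on Parquet's legacy INT96 timestamp encoding, which is exactly 12 bytes: 8 bytes of nanoseconds-within-day followed by 4 bytes of Julian day, little-endian. One plausible reading (an assumption, not confirmed by the log) is that the staged timestamp columns reach PolyBase in a different physical format than the reader expects. A minimal pure-Scala sketch of that 12-byte layout:

```scala
import java.nio.{ByteBuffer, ByteOrder}

// Parquet's legacy INT96 timestamp: 8 bytes nanos-of-day + 4 bytes Julian day,
// little-endian. Readers that expect it enforce the 12-byte length, which is
// the same "Must be 12 bytes" check surfacing in the error above (assumption).
def encodeInt96(julianDay: Int, nanosOfDay: Long): Array[Byte] = {
  val buf = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN)
  buf.putLong(nanosOfDay) // first 8 bytes: nanoseconds within the day
  buf.putInt(julianDay)   // last 4 bytes: Julian day number
  buf.array()
}

def decodeInt96(bytes: Array[Byte]): (Int, Long) = {
  require(bytes.length == 12, "Must be 12 bytes")
  val buf = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN)
  val nanos = buf.getLong()
  (buf.getInt(), nanos)
}

val encoded = encodeInt96(2459409, 123456789L) // 2459409 ≈ 2021-07-13
assert(encoded.length == 12)
assert(decodeInt96(encoded) == (2459409, 123456789L))
```

A value encoded any other way (for example as an 8-byte INT64 of microseconds) fails the `require` above, which is consistent with the `IllegalArgumentException` in the stack trace.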

Why am I getting this error?
The schema of the `wet_yield_temp` table in Databricks is:

unitsn:string
brand:string
type:string
model:string
date:date
time_start:timestamp
time_end:timestamp
alert:integer
duration_mins:double
severity:integer
three_days_severity:integer
triggerName:string

The schema of the target table `sensor_damaged.wet_yield_sensor_damaged_combine_history` in Azure Synapse is:

CREATE TABLE [sensor_damaged].[wet_yield_sensor_damaged_combine_history]
(
    [unitsn] [nvarchar](250) NULL,
    [brand] [nvarchar](250) NULL,
    [type] [nvarchar](250) NULL,
    [model] [nvarchar](250) NULL,
    [date] [date] NULL,
    [time_start] [datetime] NULL,
    [time_end] [datetime] NULL,
    [alert] [int] NULL,
    [duration_mins] [float] NULL,
    [severity] [int] NULL,
    [three_days_severity] [int] NULL,
    [triggerName] [nvarchar](150) NULL
)
