java.sql.SQLException: No suitable driver when trying to run a Python script on a Databricks cluster using Databricks Connect

i86rm4rw · Posted 2021-07-14 in Spark

I am trying to run a Python script from Visual Studio Code on a Databricks cluster using Databricks Connect. The JAR for the Apache Spark connector for SQL Server and Azure SQL is already installed on the Databricks cluster.
However, I get the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o42.save. : java.sql.SQLException: No suitable driver

from pyspark.sql import SparkSession, DataFrame
from db_connection import fetch_db_config
from pyspark.context import SparkContext
from pyspark import sql
from IPython.display import display

## Fetch data from a Delta table into a DataFrame

def fetch_data_from_delta(
  spark: SparkSession, 
  sql: sql
  ): 
  global df
  df = spark.sql("select * from lending_club_acc_loans limit 10000")
  # newdf = df.repartition(8)
  return df

def load_df_to_db():

  jdbchostname, jdbcurl, jdbcusername, jdbcpassword, connection_properties = fetch_db_config()

  try:
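    # The "com.microsoft.sqlserver.jdbc.spark" format is provided by the
    # Apache Spark connector for SQL Server and Azure SQL; the connector JAR
    # (and the JDBC driver it bundles) must be on the cluster's classpath.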
    df.write \
      .format("com.microsoft.sqlserver.jdbc.spark") \
      .mode("overwrite") \
      .option("url", jdbcurl) \
      .option("dbtable", "lending_club_acc_loans") \
      .option("user", jdbcusername) \
      .option("password", jdbcpassword) \
      .option("tableLock", "true") \
      .option("batchsize", "500") \
      .option("reliabilityLevel", "BEST_EFFORT") \
      .save()
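    # Note: Py4JJavaError (raised by save() on the JDBC failure) is not a
    # ValueError, so the handler below will not catch it.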

  except ValueError as error:
    print("Connector write failed", error)

def run():
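  # Note: spark.driver.extraClassPath is read when the driver JVM launches,
  # so setting it here via getOrCreate() against an already-running
  # Databricks cluster may have no effect.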
  spark = SparkSession.builder \
    .config(
      'spark.driver.extraClassPath',
      'dbfs:/FileStore/jars/18df20ee_ae05_45d0_ba41_89502b080b51-spark_mssql_connector_2_12_3_0_1_0_0_alpha-a6e1c.jar'
    ) \
    .getOrCreate()

  fetch_data_from_delta(spark, sql)

  load_df_to_db()

if __name__ == "__main__":
    run()

Can anyone help me solve this issue?


kgqe7b3p1#

Found the problem. The issue was the jdbcurl argument passed in the .option("url", jdbcurl) call. I hardcoded the URL, tested the code, and it ran successfully.
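For reference, a minimal sketch of what a well-formed JDBC URL for SQL Server / Azure SQL looks like (the host, port, and database values below are placeholders, not values from the original post). "No suitable driver" is typically thrown when the URL passed to .option("url", ...) does not start with a prefix such as jdbc:sqlserver:// that a registered JDBC driver recognizes:

jdbc_hostname = "myserver.database.windows.net"  # placeholder host
jdbc_port = 1433
jdbc_database = "mydatabase"                     # placeholder database

# DriverManager matches the "jdbc:sqlserver://" prefix to the Microsoft JDBC
# driver; a malformed or empty URL yields "No suitable driver".
jdbcurl = (
    f"jdbc:sqlserver://{jdbc_hostname}:{jdbc_port};"
    f"database={jdbc_database};encrypt=true;loginTimeout=30;"
)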
