AWS Glue job with bookmarks enabled fails with "Datasource does not support writing empty or nested empty schemas"

yk9xbfzb · Posted 2021-05-27 in Spark

I have a Python 3 job in AWS Glue (version 1.0) with job bookmarks enabled. The job converts a JSON data source into Parquet files in an S3 bucket. It runs perfectly the first time, or whenever I reset the bookmark.
However, subsequent runs fail with the following error:
AnalysisException: "Datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one or more column(s)."
The script was generated by the AWS console and has not been modified. The source is an S3 bucket of JSON files registered in the Data Catalog, and the output goes to another bucket.

    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    ## @params: [JOB_NAME]
    args = getResolvedOptions(sys.argv, ['JOB_NAME'])

    sc = SparkContext()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)
    ## @type: DataSource
    ## @args: [database = "segment", table_name = "segment_zlw54zvojf", transformation_ctx = "datasource0"]
    ## @return: datasource0
    ## @inputs: []
    datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "segment", table_name = "segment_zlw54zvojf", transformation_ctx = "datasource0")
    ## @type: ApplyMapping
    ## @args: [mapping = [("channel", "string", "channel", "string"), ("context", "struct", "context", "struct"), ("event", "string", "event", "string"), ("integrations", "struct", "integrations", "struct"), ("messageid", "string", "messageid", "string"), ("projectid", "string", "projectid", "string"), ("properties", "struct", "properties", "struct"), ("receivedat", "string", "receivedat", "string"), ("timestamp", "string", "timestamp", "string"), ("type", "string", "type", "string"), ("userid", "string", "userid", "string"), ("version", "int", "version", "int"), ("anonymousid", "string", "anonymousid", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1"]
    ## @return: applymapping1
    ## @inputs: [frame = datasource0]
    applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("channel", "string", "channel", "string"), ("context", "struct", "context", "struct"), ("event", "string", "event", "string"), ("integrations", "struct", "integrations", "struct"), ("messageid", "string", "messageid", "string"), ("projectid", "string", "projectid", "string"), ("properties", "struct", "properties", "struct"), ("receivedat", "string", "receivedat", "string"), ("timestamp", "string", "timestamp", "string"), ("type", "string", "type", "string"), ("userid", "string", "userid", "string"), ("version", "int", "version", "int"), ("anonymousid", "string", "anonymousid", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1")
    ## @type: ResolveChoice
    ## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
    ## @return: resolvechoice2
    ## @inputs: [frame = applymapping1]
    resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
    ## @type: DropNullFields
    ## @args: [transformation_ctx = "dropnullfields3"]
    ## @return: dropnullfields3
    ## @inputs: [frame = resolvechoice2]
    dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
    ## @type: DataSink
    ## @args: [connection_type = "s3", connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"}, format = "parquet", transformation_ctx = "datasink4"]
    ## @return: datasink4
    ## @inputs: [frame = dropnullfields3]
    datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"}, format = "parquet", transformation_ctx = "datasink4")
    job.commit()

Any suggestions would be greatly appreciated.

ujv3wf0j 1#

I figured this out.
New data is written to the source S3 bucket every day, but it lands in new subfolders within that bucket.
For the AWS Glue job to pick up those new subfolders, I need to re-run the AWS Glue crawler so that the source Data Catalog table is updated.
If the crawler is not re-run, the new data is not recognized; with the bookmark excluding everything already processed, the default AWS-generated script ends up trying to write an empty dataset and fails with the error above.
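Independently of the crawler fix described next, a guard on the write step avoids the hard failure when bookmarks leave nothing new to process. This is a minimal sketch, not part of the original generated script; it reuses the frame name dropnullfields3 from the script above, and the count()-based emptiness check is just one way to detect an empty DynamicFrame:

    # Only write when the bookmark-filtered frame actually contains records.
    # NOTE: count() forces an evaluation of the frame, which adds some runtime.
    if dropnullfields3.count() > 0:
        datasink4 = glueContext.write_dynamic_frame.from_options(
            frame = dropnullfields3,
            connection_type = "s3",
            connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"},
            format = "parquet",
            transformation_ctx = "datasink4")
    else:
        print("No new records after bookmark filtering; skipping Parquet write.")
    job.commit()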
To solve this, I plan to run the crawler before executing the job.
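One way to automate that ordering is to start the crawler from the job itself (or from an upstream step) and wait for it to finish before reading from the Data Catalog. The sketch below uses boto3's Glue client; the crawler name segment_crawler is an assumption, not taken from the original post:

    import time
    import boto3

    glue_client = boto3.client("glue")
    CRAWLER_NAME = "segment_crawler"  # hypothetical crawler name, replace with your own

    # Start the crawler that maintains the "segment" Data Catalog table,
    # then poll until it returns to the READY state before the ETL reads the catalog.
    glue_client.start_crawler(Name=CRAWLER_NAME)
    while glue_client.get_crawler(Name=CRAWLER_NAME)["Crawler"]["State"] != "READY":
        time.sleep(30)

A Glue workflow or a scheduled trigger that chains the crawler and the job would achieve the same ordering without polling inside the job.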
