PySpark loading CSV: AttributeError: 'RDD' object has no attribute '_get_object_id'

x3naxklr · posted 2021-05-29 in Hadoop

I am trying to load a CSV file into a Spark DataFrame. This is what I have so far:


from pyspark import SparkConf, SparkContext
from pyspark import sql

appName = "testSpark"
master = "local"
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)
sqlContext = sql.SQLContext(sc)

# csv path -- note this creates an RDD, not a path string
text_file = sc.textFile("hdfs:///path/to/sensordata20171008223515.csv")
df = sqlContext.load(source="com.databricks.spark.csv", header='true', path=text_file)

print df.schema

Here is the traceback:

Traceback (most recent call last):
File "/home/centos/main.py", line 16, in <module>
df = sc.textFile(text_file).map(lambda line: (line.split(';')[0], line.split(';')[1])).collect()
File "/usr/hdp/2.5.6.0-40/spark/python/lib/pyspark.zip/pyspark/context.py", line 474, in textFile
File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 804, in __call__
File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 278, in get_command_part
AttributeError: 'RDD' object has no attribute '_get_object_id'

I am new to this, so it would be very helpful if someone could tell me what I am doing wrong.


cig3rfwq (answer 1):

You cannot pass an RDD to the CSV reader. You should use the path directly:

df = sqlContext.load(source="com.databricks.spark.csv",
                     header='true',
                     path="hdfs:///path/to/sensordata20171008223515.csv")

Only a few formats (JSON in particular) support an RDD as the input argument.
