Writing a PySpark dataframe to Parquet with a header

Question · 0 votes · 1 answer

So if I first run df = sql_context.read.csv("test_data_2019-01-01.csv", header=False) and then df.write.parquet("test_data_2019-01-01.parquet"), everything works fine. But if I set header=True in read.csv and then try to write, I get the following error:

An error occurred while calling o522.parquet. : org.apache.spark.sql.AnalysisException: Attribute name " M6_Debt_Review_Ind" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.
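Parquet rejects column names containing any of the characters listed in the error, including plain spaces. A quick way to spot the offending name (a minimal sketch, assuming the df read with header=True above) is to print each column name through repr(), so stray whitespace becomes visible:

# Print every column name quoted; a leading space shows up clearly,
# e.g. ' M6_Debt_Review_Ind' instead of 'M6_Debt_Review_Ind'.
for name in df.columns:
    print(repr(name))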

I need those headers, because otherwise the column names look like this:

[Row(_c0='foo', _c1='bar', _c2='bla', _c3='bla2', _c4='blabla', _c5='bla3', _c6=' bla4'), Row(_c0='1161057', _c1='57793622', _c2='6066807', _c3='2017-01-31', _c4='2017-01-31', _c5='1', _c6='0'), Row(_c0='1177047', _c1='58973984', _c2='4938603', _c3='2017-02-28', _c4='2017-02-28', _c5='0', _c6='0')]

instead of

[Row(foo='1161057', bar='57793622', bla='6066807', bla2='2017-01-31', blabla='2017-01-31', bla3='1', M6_Debt_Review_Ind='0'), Row(foo='1177047', bar='58973984', bla='4938603', bla2='2017-02-28', blabla='2017-02-28', bla3='0', bla4='0')]

Thanks for any suggestions.

pyspark pyspark-sql parquet pyspark-dataframes
1 Answer

0 votes

Never mind, silly mistake. There was a space in one of the column names.
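A minimal sketch of the fix, using SparkSession in place of the question's sql_context and assuming the same file names: strip the whitespace from every column name before writing, for example with toDF:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.csv("test_data_2019-01-01.csv", header=True)

# Strip leading/trailing whitespace from every column name; Parquet
# rejects names containing any of " ,;{}()\n\t=".
df = df.toDF(*[c.strip() for c in df.columns])

df.write.parquet("test_data_2019-01-01.parquet")

toDF(*names) returns a new DataFrame with the columns renamed positionally, so there is no need to alias each offending column one by one.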
