Reading a text file in Spark 2

Question (votes: 2, answers: 1)

I am trying to read a text file in Spark 2.3 using Python, but I get the error below. This is the format of the text file:

name marks
amar 100
babul 70
ram 98
krish 45

Code:

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

df=spark.read.option("header","true")\
    .option("delimiter"," ")\
    .option("inferSchema","true")\
    .schema(
        StructType(
            [
                StructField("Name",StringType()),
                StructField("marks",IntegerType())
            ]
        )
    )\
    .text("file:/home/maria_dev/prac.txt")

Error:

java.lang.AssertionError: assertion failed: Text data source only
produces a single data column named "value"

When I try to read the text file into an RDD, it is collected as a single column.

Should I change the data file, or should I change my code?

pyspark apache-spark-2.2
1 Answer (3 votes)

Instead of .text (which only produces a single "value" column), use .csv to load the file into a DataFrame.

>>> from pyspark.sql.types import *
>>> df=spark.read.option("header","true")\
    .option("delimiter"," ")\
    .option("inferSchema","true")\
    .schema(
        StructType(
            [
                StructField("Name",StringType()),
                StructField("marks",IntegerType())
            ]
        )
    )\
    .csv('file:///home/maria_dev/prac.txt')
>>> df
DataFrame[Name: string, marks: int]
>>> df.show(10,False)
+-----+-----+
|Name |marks|
+-----+-----+
|amar |100  |
|babul|70   |
|ram  |98   |
|krish|45   |
+-----+-----+