Creating a DataFrame by loading a CSV file with Scala in Spark

Question (0 votes, 1 answer)

The CSV file has extra double quotes added to it, which causes all the columns to be read as a single column.

There are four columns, a header row, and two data rows:

"""SlNo"",""Name"",""Age"",""contact"""
"1,""Priya"",78,""Phone"""
"2,""Jhon"",20,""mail"""

val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("delimiter", ",")
  .option("inferSchema", "true")
  .load("bank.csv")
df: org.apache.spark.sql.DataFrame = ["SlNo","Name","Age","contact": string]
scala csv apache-spark dataframe apache-spark-sql
1 Answer

1 vote

What you can do is read the file with sparkContext, replace every `"` with an empty string, and use zipWithIndex() to separate the header from the data rows, so that you can build a custom schema and a row RDD. Finally, just pass the row RDD and the schema to sqlContext's createDataFrame API.
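The reason everything lands in one column is that each line of the file is itself one fully quoted CSV field, with `""` acting as an escaped quote. A plain-Scala sketch (no Spark needed, sample header hardcoded for illustration) of how a standards-compliant CSV parser sees the first line:

```scala
object QuoteDemo extends App {
  // The raw header line from bank.csv: """SlNo"",""Name"",""Age"",""contact"""
  val rawHeader = "\"\"\"SlNo\"\",\"\"Name\"\",\"\"Age\"\",\"\"contact\"\"\""
  // Strip the outer quotes and unescape doubled quotes, as a CSV parser would:
  // the whole line collapses into a single field, not four
  val asOneField = rawHeader.stripPrefix("\"").stripSuffix("\"").replace("\"\"", "\"")
  println(asOneField) // "SlNo","Name","Age","contact"
}
```

Since the parser cannot recover the intended columns, the fix below bypasses the CSV reader entirely and cleans the text by hand.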

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// read the text file, strip the quotes, split on commas, and zip with an index
val rdd = sc.textFile("bank.csv").map(_.replaceAll("\"", "").split(",")).zipWithIndex()
// separate the header row to build the schema
val header = rdd.filter(_._2 == 0).flatMap(_._1).collect()
val schema = StructType(header.map(StructField(_, StringType, true)))
// separate the data rows to form the row RDD
val rddData = rdd.filter(_._2 > 0).map(x => Row.fromSeq(x._1))
// create the dataframe
sqlContext.createDataFrame(rddData, schema).show(false)
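The cleanup step itself can be checked without a Spark session; this minimal sketch applies the same replaceAll/split transformation to the sample rows (hardcoded here instead of being read from bank.csv):

```scala
object CleanupDemo extends App {
  // The three raw lines of the sample file
  val lines = Seq(
    "\"\"\"SlNo\"\",\"\"Name\"\",\"\"Age\"\",\"\"contact\"\"\"",
    "\"1,\"\"Priya\"\",78,\"\"Phone\"\"\"",
    "\"2,\"\"Jhon\"\",20,\"\"mail\"\"\""
  )
  // Same transformation as the rdd step: drop every quote, then split on commas
  val cleaned = lines.map(_.replaceAll("\"", "").split(","))
  println(cleaned.head.mkString("|")) // SlNo|Name|Age|contact
  println(cleaned(1).mkString("|"))   // 1|Priya|78|Phone
}
```

Note that stripping every `"` is only safe because no field legitimately contains a quote or an embedded comma; for messier data a real CSV parser would be needed.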

You should get

+----+-----+---+-------+
|SlNo|Name |Age|contact|
+----+-----+---+-------+
|1   |Priya|78 |Phone  |
|2   |Jhon |20 |mail   |
+----+-----+---+-------+

I hope the answer is helpful.
