I have an Array[Row] and I am mapping it with a case class to get an RDD:

case class MyClass(s: String, l: Long)
sparkSession.sparkContext.parallelize(row.map(r1 =>
  MyClass(r1.getString(0).concat(r1.getString(1)), r1.getLong(2))))
Each row in the array has three fields. I want to concatenate the first and second fields, i.e. r1.getString(0).concat(r1.getString(1)), with the delimiter "-".
Input array row: ["string1", "string2", someLongNum]
Expected output RDD of the case class: ["string1-string2", someLongNum]
You can try this:

Input:
val rdd = sc.parallelize(data)
// data: Array[org.apache.spark.sql.Row] = Array([AAA,a,100], [BBB,b,200], [CCC,c,300])
val result = rdd.map(r => MyClass(r.getString(0) + "-" + r.getString(1), r.getLong(2)))
// result: org.apache.spark.rdd.RDD[MyClass] = MapPartitionsRDD[15]
Output:
result.collect.foreach(println)
// MyClass(AAA-a,100)
// MyClass(BBB-b,200)
// MyClass(CCC-c,300)
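Equivalently, the delimiter can be spliced in with string interpolation instead of `+` or `concat`. A minimal sketch of just the mapping step, assuming `rdd` is the RDD[Row] from above and the case class matches the question's schema:

```scala
// Case class with explicit field names and types
case class MyClass(key: String, value: Long)

// s-interpolation builds the "field0-field1" key in one expression
val result = rdd.map(r => MyClass(s"${r.getString(0)}-${r.getString(1)}", r.getLong(2)))
```

This reads a little more clearly when the delimiter or the number of joined fields changes later.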