Convert a Scala DataFrame with an array-type column to a Dataset


I have a Scala DataFrame that looks like this:

+--------+--------------------+
|     uid|     recommendations|
+--------+--------------------+
|41344966|[[2174, 4.246965E...|
|41345063|[[2174, 0.0015455...|
|41346177|[[2996, 4.137125E...|
|41349171|[[2174, 0.0010590...|

df: org.apache.spark.sql.DataFrame = [uid: int, recommendations: array<struct<iid:int,rating:float>>]
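A DataFrame with the same shape can be built by hand for experimenting. This is only a sketch: the values are made up, and Rec is a throwaway element type used solely to produce the array<struct<iid:int,rating:float>> schema.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Throwaway element type so toDF yields array<struct<iid:int,rating:float>>.
case class Rec(iid: Int, rating: Float)

val df = Seq(
  (41344966, Array(Rec(2174, 0.0004f), Rec(2996, 0.0012f))),
  (41345063, Array(Rec(2174, 0.0015f)))
).toDF("uid", "recommendations")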

I want to convert it to a Scala Dataset to take advantage of the additional functionality. However, I'm new to Scala, and it isn't clear to me how to write the conversion class when a column holds more than one data type. This is what I have:

val query = "SELECT * FROM myTable"
val df = spark.sql(query)

case class userRecs (uid: String, recommendations: Array[Int])
val ds = df.as[userRecs]

The error I get is:

org.apache.spark.sql.AnalysisException: cannot resolve 'CAST(lambdavariable(MapObjects_loopValue47, MapObjects_loopIsNull47, StructField(iid,IntegerType,true), StructField(rating,FloatType,true), true) AS INT)' due to data type mismatch: cannot cast struct<iid:int,rating:float> to int;

How should I rewrite the class?

scala apache-spark apache-spark-dataset
1 Answer

The solution was to create a second case class for my outer class to use. The cast fails because recommendations is an array of structs with fields iid and rating, not an array of ints, so the case classes have to mirror the nested schema:

import spark.implicits._  // brings the Encoders for the case classes into scope

case class productScore(iid: Int, rating: Float)
case class userRecs(uid: Int, recommendations: Array[productScore])

val ds = df.as[userRecs]
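With the nested case class in place, the Dataset gives typed access to the struct fields. A small follow-on sketch, continuing from the snippet above (the "top-rated item per user" logic is just an illustration and assumes every user has at least one recommendation):

// Pick each user's best-rated iid using ordinary Scala collection ops.
val topPick = ds.map(u => (u.uid, u.recommendations.maxBy(_.rating).iid))
topPick.show()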