Scala / Spark: how to pass this parameter to the .select statement


I have a way of getting a valid subset of a DataFrame, and this works:
val subset_cols = {joinCols :+ col}
val df1_subset = df1.select(subset_cols.head, subset_cols.tail: _*)

This does not work (the code compiles, but I get a runtime error):

val subset_cols = {joinCols :+ col}
val df1_subset = df1.select(subset_cols.deep.mkString(","))

Error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: 
cannot resolve '`first_name,last_name,rank_dr`' given input columns: 
[model, first_name, service_date, rank_dr, id, purchase_date, 
dealer_id, purchase_price, age, loyalty_score, vin_num, last_name, color];;

'Project ['first_name,last_name,rank_dr]

I am trying to pass subset_cols into the .select method, but it seems I'm missing some kind of formatting. Can someone assist?

Thanks

scala apache-spark
1 Answer

What you are doing is:

df1.select("first_name,last_name,rank_dr")

Spark tries to find a single column named "first_name,last_name,rank_dr", which does not exist.

Try:

val df1_subset = df1.selectExpr(subset_cols: _*) 
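The root cause is how Scala varargs work. Both select(col: String, cols: String*) and selectExpr(exprs: String*) take repeated parameters, so a Seq must be splatted with : _* to become separate arguments; mkString(",") instead produces one string containing commas, which Spark treats as a single (nonexistent) column name. A minimal plain-Scala sketch of the difference (no Spark needed; the local select here is a hypothetical stand-in with the same varargs shape):

```scala
// Hypothetical varargs method, analogous in shape to DataFrame.select(cols: String*)
def select(cols: String*): Seq[String] = cols.toSeq

val subsetCols = Seq("first_name", "last_name", "rank_dr")

// Splatting with `: _*` passes three separate arguments
val ok = select(subsetCols: _*)

// mkString collapses everything into ONE argument: a single string
// containing commas, which Spark would then look up as a literal column name
val bad = select(subsetCols.mkString(","))

println(ok)  // three elements
println(bad) // one element: "first_name,last_name,rank_dr"
```

This is why df1.select(subset_cols.head, subset_cols.tail: _*) and df1.selectExpr(subset_cols: _*) both work, while df1.select(subset_cols.mkString(",")) fails at runtime with the AnalysisException shown above.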