Spark join causes ambiguous column id error


I have the following dataframes:

accumulated_results_df
 |-- company_id: string (nullable = true)
 |-- max_dd: string (nullable = true)
 |-- min_dd: string (nullable = true)
 |-- count: string (nullable = true)
 |-- mean: string (nullable = true)

computed_df
 |-- company_id: string (nullable = true)
 |-- min_dd: date (nullable = true)
 |-- max_dd: date (nullable = true)
 |-- mean: double (nullable = true)
 |-- count: long (nullable = false)

I am trying to join them using Spark SQL as follows:

 val resultDf = accumulated_results_df.as("a").join(computed_df.as("c"),
   ($"a.company_id" === $"c.company_id") && ($"c.min_dd" > $"a.max_dd"), "left")

This fails with the error:

org.apache.spark.sql.AnalysisException: Reference 'company_id' is ambiguous, could be: a.company_id, c.company_id.;

What am I doing wrong here, and how can I fix it?

apache-spark apache-spark-sql datastax
1 Answer

I fixed it as shown below, by renaming the join key on the right-hand side so that only one column is named company_id:

 val resultDf = accumulated_results_df.join(
   computed_df.withColumnRenamed("company_id", "right_company_id").as("c"),
   accumulated_results_df("company_id") === $"c.right_company_id" &&
     $"c.min_dd" > accumulated_results_df("max_dd"),
   "left")