Java / Spark: how to find the key with the maximum value in a column holding an array of maps


I have a dataframe, and I want to get the key with the maximum value from each map.

Creating the dataframe:

Dataset<Row> data = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/home/path/to/file/verify.csv");
// load the Spark ML model
PipelineModel gloveModel = PipelineModel.load("models/gloveModel");
Dataset<Row> df = gloveModel.transform(data);

df.printSchema();

 |-- id: integer (nullable = true)
 |-- description: string (nullable = true)
 |-- class: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- result: string (nullable = true)     
 |    |    |-- metadata: map (nullable = true)      
 |    |    |    |-- key: string
 |    |    |    |-- value: string (valueContainsNull = true)

The field holding the map entries looks like this:

df.select("class.metadata").show(10, 50);

+-----------------------------------------------------------------------------------------------------------------+
|                                                                                                         metadata|
+-----------------------------------------------------------------------------------------------------------------+
|  [[Sports -> 3.2911853E-9, Business -> 5.1852658E-6, World -> 3.96135E-9, Sci/Tech -> 0.9999949, sentence -> 0]]|
|      [[Sports -> 1.9902605E-10, Business -> 1.0305631E-8, World -> 1.0, Sci/Tech -> 3.543277E-9, sentence -> 0]]|
|    [[Sports -> 1.0, Business -> 8.1944885E-12, World -> 4.554111E-13, Sci/Tech -> 1.7239962E-12, sentence -> 0]]|
+-----------------------------------------------------------------------------------------------------------------+

I want to get the following result (the key with the maximum value from each row's map):

+--------------+
|    prediction|
+--------------+
|      Sci/Tech|
|         World|
|        Sports|
+--------------+

What I have tried:

df.select(map_values(col("class.metadata"))).show(10, 50); — but it fails with:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'map_values(`class`.`metadata`)' due to data type mismatch: argument 1 requires map type, however, '`class`.`metadata`' is of array<map<string,string>> type.;;
'Project [map_values(class#95.metadata) AS map_values(class.metadata)#106]...

df.select(flatten(col("class"))).show(); — error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'flatten(`class`)' due to data type mismatch: The argument should be an array of arrays, but '`class`' is of array<struct<annotatorType:string,begin:int,end:int,result:string,metadata:map<string,string>,embeddings:array<float>>> type.;;
'Project [flatten(class#95) AS flatten(class)#106]

My Spark SQL version is 2.4.0 (where, as I understand it, the explode function is deprecated).

Any suggestions are much appreciated! Thanks!

java dataframe apache-spark-sql aggregation apache-spark-mllib
1 Answer
Use explode to extract the maps from the array, then pass that map column to the map_values function. Please check below.

import org.apache.spark.sql.functions.{explode, map_values}

df.select(explode($"class.metadata").as("metadata"))
  .select(map_values($"metadata"))
  .show(false)
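The snippet above only surfaces each map's values; to produce the prediction column the question asks for, each row's map still has to be reduced to the key with the largest value. A minimal sketch of that per-map comparison in plain Java, which could be wrapped in a UDF and applied after the explode step (the class name, and the choice to skip the non-score "sentence" entry, are my assumptions, not from the original post):

```java
import java.util.Comparator;
import java.util.Map;

public class MaxKey {
    // Return the key whose value parses to the largest double,
    // ignoring the "sentence" entry (assumed to be bookkeeping
    // metadata rather than a class probability).
    static String maxKey(Map<String, String> metadata) {
        return metadata.entrySet().stream()
                .filter(e -> !"sentence".equals(e.getKey()))
                .max(Comparator.comparingDouble(
                        (Map.Entry<String, String> e) -> Double.parseDouble(e.getValue())))
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```

Registered via spark.udf().register(...) (the exact registration signature depends on how Spark hands the map column to a Java UDF), this yields the desired prediction column. Alternatively, Spark 2.4's higher-order SQL functions (map_entries, transform, array_max) may express the same comparison without a UDF.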
