Column type inferred as binary for a typed UDAF

Question (votes: 2, answers: 1)

I am trying to implement a typed UDAF that returns a complex type. Somehow Spark cannot infer the type of the result column and makes it binary, putting the serialized data there instead. Here is a minimal example that reproduces the problem:

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{SparkSession, Encoder, Encoders}

case class Data(key: Int)

class NoopAgg[I] extends Aggregator[I, Map[String, Int], Map[String, Int]] {
    override def zero: Map[String, Int] = Map.empty[String, Int]

    override def reduce(b: Map[String, Int], a: I): Map[String, Int] = b

    override def merge(b1: Map[String, Int], b2: Map[String, Int]): Map[String, Int] = b1

    override def finish(reduction: Map[String, Int]): Map[String, Int] = reduction

    override def bufferEncoder: Encoder[Map[String, Int]] = Encoders.kryo[Map[String, Int]]

    override def outputEncoder: Encoder[Map[String, Int]] = Encoders.kryo[Map[String, Int]]
}

object Question {
  def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().master("local").getOrCreate()

      val sc = spark.sparkContext

      import spark.implicits._

      val ds = sc.parallelize((1 to 10).map(i => Data(i))).toDS()

      val noop = new NoopAgg[Data]().toColumn

      val result = ds.groupByKey(_.key).agg(noop.as("my_sum").as[Map[String, Int]])

      result.printSchema()
  }
}

It prints:

root
 |-- value: integer (nullable = false)
 |-- my_sum: binary (nullable = true)
Tags: scala, apache-spark, apache-spark-sql, apache-spark-dataset, apache-spark-encoders
1 Answer (score: 1)

There is no inference going on here; you get more or less what you asked for. Specifically, the mistake is here:

override def outputEncoder: Encoder[Map[String, Int]] = Encoders.kryo[Map[String, Int]]

Encoders.kryo means that you apply generic serialization and get back a binary blob. The misleading part is .as[Map[String, Int]], which, contrary to what one might expect, is not a static type check. Worse still, the query planner does not validate it eagerly, and a runtime exception is thrown only when result is evaluated:

result.first
org.apache.spark.sql.AnalysisException: cannot resolve '`my_sum`' due to data type mismatch: cannot cast binary to map<string,int>;
  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:115)
...

Instead you should provide a concrete Encoder, either explicitly:

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder  

def outputEncoder: Encoder[Map[String, Int]] = ExpressionEncoder()

or implicitly:

class NoopAgg[I](implicit val enc: Encoder[Map[String, Int]]) extends Aggregator[I, Map[String, Int], Map[String, Int]] {
  ...
  override def outputEncoder: Encoder[Map[String, Int]] = enc
}
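
Putting the answer's explicit fix together with the rest of the question's code, a minimal sketch of the corrected aggregator could look as follows (only outputEncoder changes; the buffer encoder can stay kryo because it never surfaces in the result schema):

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

class NoopAgg[I] extends Aggregator[I, Map[String, Int], Map[String, Int]] {
    override def zero: Map[String, Int] = Map.empty[String, Int]

    override def reduce(b: Map[String, Int], a: I): Map[String, Int] = b

    override def merge(b1: Map[String, Int], b2: Map[String, Int]): Map[String, Int] = b1

    override def finish(reduction: Map[String, Int]): Map[String, Int] = reduction

    // The buffer is internal state only, so generic kryo serialization is fine here.
    override def bufferEncoder: Encoder[Map[String, Int]] = Encoders.kryo[Map[String, Int]]

    // The output encoder determines the result column type, so use an
    // ExpressionEncoder resolved for Map[String, Int] instead of kryo.
    override def outputEncoder: Encoder[Map[String, Int]] = ExpressionEncoder()
}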

As a side effect, this makes as[Map[String, Int]] obsolete, because the return type of the Aggregator is already known.
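
With either variant in place, the usage from the question can be simplified. A rough sketch of the expected result (exact nullability flags may differ between Spark versions):

val noop = new NoopAgg[Data]().toColumn

// name(...) keeps the static TypedColumn type expected by agg,
// so the .as[Map[String, Int]] cast is no longer needed.
val result = ds.groupByKey(_.key).agg(noop.name("my_sum"))

result.printSchema()
// root
//  |-- value: integer (nullable = false)
//  |-- my_sum: map (nullable = true)
//  |    |-- key: string
//  |    |-- value: integer (valueContainsNull = false)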
