How do I use Spark Datasets with Thrift?

Question · votes: 4 · answers: 1

My data format is defined with Apache Thrift and the code is generated by Scrooge. I store it in Spark using Parquet, much like what is explained in this blog.
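
For context, a rough sketch of what that write path can look like; MyThriftClass (a TBase-style generated class), sc, and records: RDD[MyThriftClass] are illustrative names, not taken from the question:

import org.apache.hadoop.mapreduce.Job
import org.apache.parquet.hadoop.thrift.ParquetThriftOutputFormat

// Sketch only: store thrift objects as parquet via the thrift-aware output format.
val job = Job.getInstance(sc.hadoopConfiguration)
ParquetThriftOutputFormat.setThriftClass(job, classOf[MyThriftClass])
records
  .map(r => (null: Void, r))   // the output format ignores the key
  .saveAsNewAPIHadoopFile(
    "/path/to/data",
    classOf[Void],
    classOf[MyThriftClass],
    classOf[ParquetThriftOutputFormat[MyThriftClass]],
    job.getConfiguration)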

I can easily read the data back into a DataFrame just by doing:

val df = sqlContext.read.parquet("/path/to/data")

And I can read it into an RDD with a bit more gymnastics:

import scala.reflect.ClassTag
import org.apache.parquet.hadoop.ParquetInputFormat
import org.apache.parquet.hadoop.thrift.{ParquetThriftInputFormat, ThriftReadSupport}
import org.apache.spark.rdd.{NewHadoopRDD, RDD}
import org.apache.thrift.TBase

// sc (SparkContext) and jobConf (Hadoop configuration) are defined elsewhere.
def loadRdd[V <: TBase[_, _]](inputDirectory: String, vClass: Class[V]): RDD[V] = {
  implicit val ctagV: ClassTag[V] = ClassTag(vClass)
  ParquetInputFormat.setReadSupportClass(jobConf, classOf[ThriftReadSupport[V]])
  ParquetThriftInputFormat.setThriftClass(jobConf, vClass)
  val rdd = sc.newAPIHadoopFile(
    inputDirectory, classOf[ParquetThriftInputFormat[V]], classOf[Void], vClass, jobConf)
  rdd.asInstanceOf[NewHadoopRDD[Void, V]].values
}

loadRdd("/path/to/data", classOf[MyThriftClass])

My question is: how can I access this data with the new Dataset API introduced in Spark 1.6? The reason I want to is the Dataset API's advantage: type safety at the same speed as DataFrames.

I understand that some sort of encoder is needed, and that encoders are already provided for primitive types and for case classes, but what I have is thrift-generated code (either Java or Scala, either would do), which looks a lot like a case class but isn't actually one.
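
For contrast, a minimal sketch of what the encoder machinery already handles out of the box (the Range case class here is hypothetical, standing in for the thrift struct):

import org.apache.spark.sql.Dataset
import sqlContext.implicits._   // brings the Product (case class) encoders into scope

// A hand-written case class mirroring the thrift struct works fine:
case class Range(start: Long, end: Long)
val ds: Dataset[Range] = Seq(Range(0L, 10L), Range(10L, 20L)).toDS()

// The scrooge/thrift-generated class is not a Product, so no implicit
// encoder is found, which is exactly what the attempts below run into.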

I tried the obvious options, but they did not work:

val df = sqlContext.read.parquet("/path/to/data")

df.as[MyJavaThriftClass]

<console>:25: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._  Support for serializing other types will be added in future releases.

df.as[MyScalaThriftClass]

scala.ScalaReflectionException: <none> is not a term
  at scala.reflect.api.Symbols$SymbolApi$class.asTerm(Symbols.scala:199)
  at scala.reflect.internal.Symbols$SymbolContextApiImpl.asTerm(Symbols.scala:84)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:492)
  at org.apache.spark.sql.catalyst.ScalaReflection$.extractorsFor(ScalaReflection.scala:394)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:54)
  at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:41)
  ... 48 elided


df.as[MyScalaThriftClass.Immutable]

java.lang.UnsupportedOperationException: No Encoder found for org.apache.thrift.protocol.TField
- field (class: "org.apache.thrift.protocol.TField", name: "field")
- array element class: "com.twitter.scrooge.TFieldBlob"
- field (class: "scala.collection.immutable.Map", name: "_passthroughFields")
- root class: "com.worldsense.scalathrift.ThriftRange.Immutable"
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:597)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:509)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:502)
  at scala.collection.immutable.List.flatMap(List.scala:327)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:502)
  at org.apache.spark.sql.catalyst.ScalaReflection$.toCatalystArray$1(ScalaReflection.scala:419)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:537)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:509)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:502)
  at scala.collection.immutable.List.flatMap(List.scala:327)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:502)
  at org.apache.spark.sql.catalyst.ScalaReflection$.extractorsFor(ScalaReflection.scala:394)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:54)
  at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:41)
  ... 48 elided

Shapeless seems to work fine with thrift-generated code, and I wonder whether I could use it to generate something that the current encoder API would accept.

Any hints?

apache-spark thrift apache-spark-sql shapeless
1 Answer
0 votes

It should be possible to solve this by passing Encoders.bean(...) explicitly in place of the implicit encoder.

Example: df.as[MyJavaThriftClass](Encoders.bean(classOf[MyJavaThriftClass]))
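
A possible way to wire that up (a sketch; it assumes the thrift-generated Java class follows getter/setter bean conventions, which Encoders.bean relies on):

import org.apache.spark.sql.{Encoder, Encoders}

// Register the bean encoder once as an implicit so that .as[MyJavaThriftClass]
// resolves it without the explicit argument.
implicit val myThriftEncoder: Encoder[MyJavaThriftClass] =
  Encoders.bean(classOf[MyJavaThriftClass])

val ds = sqlContext.read.parquet("/path/to/data").as[MyJavaThriftClass]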
