Requirement failed: Wrong or missing inputCols annotators in johnsnowlabs.nlp

Problem description

I am using com.johnsnowlabs.nlp 2.2.2 with spark-2.4.4 to process some articles. The articles contain some very long words that I am not interested in and that slow down POS tagging a lot. I would like to exclude them after tokenization and before POS tagging.

I tried to write a smaller example that reproduces my problem:

import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.{Normalizer, Tokenizer}
import org.apache.spark.sql.functions._
import sc.implicits._
val documenter = new DocumentAssembler().setInputCol("text").setOutputCol("document").setIdCol("id")
val tokenizer = new Tokenizer().setInputCols(Array("document")).setOutputCol("token")
val normalizer = new Normalizer().setInputCols("token").setOutputCol("normalized").setLowercase(true)

val df = Seq("This is a very useless/ugly sentence").toDF("text")

val document = documenter.transform(df.withColumn("id", monotonically_increasing_id()))
val token = tokenizer.fit(document).transform(document)

val token_filtered = token
  .drop("token")
  .join(token
    .select(col("id"), col("token"))
    .withColumn("tmp", explode(col("token")))
    .groupBy("id")
    .agg(collect_list(col("tmp")).as("token")),
    Seq("id"))
token_filtered.select($"token").show(false)
val normal = normalizer.fit(token_filtered).transform(token_filtered)

I get this error when transforming token_filtered:

+--------------------+---+--------------------+--------------------+--------------------+
|                text| id|            document|            sentence|               token|
+--------------------+---+--------------------+--------------------+--------------------+
|This is a very us...|  0|[[document, 0, 35...|[[document, 0, 35...|[[token, 0, 3, Th...|
+--------------------+---+--------------------+--------------------+--------------------+


Exception in thread "main" java.lang.IllegalArgumentException:
requirement failed: Wrong or missing inputCols annotators in NORMALIZER_4bde2f08742a.
Received inputCols: token.
Make sure such annotators exist in your pipeline, with the right output
names and that they have following annotator types: token

If I fit and transform the normalizer directly on token, it works fine. It seems some information is lost during the explode/groupBy/collect_list, even though the schema and data look identical.

Any ideas?

scala apache-spark-sql johnsnowlabs-spark-nlp
1 Answer

The answer is: it is not feasible (https://github.com/JohnSnowLabs/spark-nlp/issues/653).

The annotations get corrupted by the groupBy operation.

Possible workarounds:

  • Implement a custom Transformer
  • Use a UDF
  • Preprocess the data before feeding it into the pipeline
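A minimal sketch of the last workaround: strip overly long words from the raw text before it ever reaches the DocumentAssembler, so no annotation structs need to survive a groupBy. The helper name, the length threshold of 15, and the whitespace-based split are illustrative assumptions, not part of the spark-nlp API.

```scala
// Hypothetical helper: drop words longer than maxLen from a raw text
// string, so the pipeline never sees them. Splitting on whitespace is
// a simplification; a real preprocessor may need smarter tokenization.
object LongWordFilter {
  def removeLongWords(text: String, maxLen: Int = 15): String =
    text.split("\\s+").filter(_.length <= maxLen).mkString(" ")
}
```

This plain function can then be wrapped in a Spark UDF and applied to the "text" column before the DocumentAssembler, e.g. `df.withColumn("text", udf(LongWordFilter.removeLongWords(_: String)).apply(col("text")))`, keeping the spark-nlp pipeline itself untouched.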