How to speed up windowing a text file and machine learning over the windows in Spark


I am trying to use Spark to learn multiclass logistic regression over a windowed text file. What I do is first create the windows and explode them into $"word_winds". Then the center word of each window is moved into $"word". To fit the LogisticRegression model, I convert each distinct word into a class ($"label") and learn over those classes. I also thin out the labels, dropping words that have very few samples (at most minF occurrences).

The problem is that some parts of the code are very slow, even for small input files (you can test the code with some README file). Googling around, some users have experienced slowness when using explode. They suggest modifications to the code that give roughly a 2x speedup. However, I think that with a 100MB input file this will not be enough. Please suggest something different, perhaps a way to avoid the operations that slow the code down. I am using Spark 2.4.0 and sbt 1.2.8 on a 24-core machine.

import org.apache.spark.sql.functions._
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, IDF}
import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel
import org.apache.spark.sql.types._



object SimpleApp {
  def main(args: Array[String]) {

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

spark.sparkContext.setCheckpointDir("checked_dfs")

val in_file = "sample.txt"
val stratified = true
val wsize = 7
val ngram = 3
val minF = 2

val windUdf = udf{s: String => s.sliding(ngram).toList.sliding(wsize).toList}
val get_mid = udf{s: Seq[String] => s(s.size/2)}
val rm_punct = udf{s: String => s.replaceAll("""([\p{Punct}|¿|\?|¡|!]|\p{C}|\b\p{IsLetter}{1,2}\b)\s*""", "")}

// Read and remove punctuation
var df = spark.read.text(in_file)
                    .withColumn("value", rm_punct($"value"))

// Create the windows, explode them, and put the center word of each window into $"word"
df = df.withColumn("char_nGrams", windUdf('value))
        .withColumn("word_winds", explode($"char_nGrams"))
        .withColumn("word", get_mid('word_winds))
val indexer = new StringIndexer().setInputCol("word")
                                    .setOutputCol("label")
df = indexer.fit(df).transform(df)

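// Build TF-IDF features from the character n-gram windows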
val hashingTF = new HashingTF().setInputCol("word_winds")
                                .setOutputCol("freqFeatures")
df = hashingTF.transform(df)
val idf = new IDF().setInputCol("freqFeatures")
                    .setOutputCol("features")
df = idf.fit(df).transform(df)
// Remove words whose frequency is at most minF
var counts = df.groupBy("label").count
                                .filter(col("count") > minF)
                                .orderBy(desc("count"))
                                .withColumn("id", monotonically_increasing_id())
var filtro = df.groupBy("label").count.filter(col("count") <= minF)
df = df.join(filtro, Seq("label"), "leftanti")
var dfs = if(stratified){
// Create stratified sample 'dfs'
        var revs = counts.orderBy(asc("count")).select("count")
                                                .withColumn("id", monotonically_increasing_id())
        revs = revs.withColumnRenamed("count", "ascc")
// Weight each label inversely proportional to its word frequency ("ascc"), then normalize the weights

        counts = counts.join(revs, Seq("id"), "inner").withColumn("weight", col("ascc")/df.count)
        val minn = counts.select("weight").agg(min("weight")).first.getDouble(0) - 0.01
        val maxx = counts.select("weight").agg(max("weight")).first.getDouble(0) - 0.01
        counts = counts.withColumn("weight_n", (col("weight") - minn) / (maxx - minn))
        counts = counts.withColumn("weight_n", when(col("weight_n") > 1.0, 1.0)
                       .otherwise(col("weight_n")))
        var fractions = counts.select("label", "weight_n").rdd.map(x => (x(0), x(1)
                                .asInstanceOf[scala.Double])).collectAsMap.toMap
        df.stat.sampleBy("label", fractions, 36L).select("features", "word_winds", "word", "label")
        }else{ df }
dfs = dfs.checkpoint()

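// Multiclass logistic regression: one class per remaining center word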
val lr = new LogisticRegression().setRegParam(0.01)

val Array(tr, ts) = dfs.randomSplit(Array(0.7, 0.3), seed = 12345)
val training = tr.select("word_winds", "features", "label", "word")
val test = ts.select("word_winds", "features", "label", "word")

val model = lr.fit(training)

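// Map predicted label indices back to the corresponding words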
def mapCode(m: scala.collection.Map[Any, String]) = udf( (s: Double) =>
                m.getOrElse(s, "")
        )
var labels = training.select("label", "word").distinct.rdd
                                             .map(x => (x(0), x(1).asInstanceOf[String]))
                                             .collectAsMap
var predictions = model.transform(test)
predictions = predictions.withColumn("pred_word", mapCode(labels)($"prediction"))
predictions.write.format("csv").save("spark_predictions")

spark.stop()
  }
}
scala apache-spark apache-spark-mllib
1 Answer

Since your data is somewhat small, it might help to use coalesce before you explode. Sometimes it can be inefficient to have too many nodes when there is a lot of shuffling going on in your code.
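As a rough sketch, reusing df, windUdf and get_mid from your code (the partition count 8 below is just an arbitrary example to tune for your machine):

// Coalesce to a few partitions before exploding, so the exploded rows
// are not scattered over many tiny tasks (8 is an arbitrary example)
df = df.coalesce(8)
       .withColumn("char_nGrams", windUdf('value))
       .withColumn("word_winds", explode($"char_nGrams"))
       .withColumn("word", get_mid('word_winds))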

Like you said, it seems a lot of people have problems with explode. I looked at the link you provided, but nobody there mentioned trying flatMap instead of explode.
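Here is a minimal sketch of what that could look like, assuming the same in_file, ngram, wsize and punctuation regex as in your question (and the import spark.implicits._ you already have). It builds the (word_winds, word) rows with a typed flatMap, so the char_nGrams array column and the explode step are never materialized:

// Read as Dataset[String], strip punctuation, then emit one row per window
// directly with flatMap instead of a UDF followed by explode
val windows = spark.read.textFile(in_file)
  .map(_.replaceAll("""([\p{Punct}|¿|\?|¡|!]|\p{C}|\b\p{IsLetter}{1,2}\b)\s*""", ""))
  .flatMap { value =>
    value.sliding(ngram).toList          // character n-grams
         .sliding(wsize).toList          // windows of n-grams
         .map(w => (w, w(w.size / 2)))   // (window, center word)
  }
  .toDF("word_winds", "word")            // the columns the rest of your pipeline uses

From there you can feed windows into the StringIndexer / HashingTF / IDF stages exactly as you currently do with df.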
