Spark - alternative ways to do word count

Problem description · Votes: 1 · Answers: 2

I have a million lines of text. Apart from the traditional approach of mapping each word to 1 and then reducing by key, is there any other way to implement word count in Spark?

The traditional approach:

// SPACE is left undefined in the question; presumably a precompiled
// java.util.regex.Pattern, e.g. Pattern.compile(" ")
JavaPairRDD<String, Integer> counts = textFile
        .flatMap(s -> Arrays.asList(SPACE.split(s)).iterator())
        .mapToPair(s -> new Tuple2<>(s, 1))
        .reduceByKey((a, b) -> a + b);

Any newer approaches?

apache-spark word-count
2 Answers

0 votes

There are certainly many ways to do it. Here are two:

One: flatMap and build a DataFrame:

JavaRDD<Row> rowRdd = spark.read()
        .textFile("loremipsum.txt")
        .javaRDD()
        .flatMap(s -> Arrays.asList(s.split(" ")).iterator())
        .map(s -> RowFactory.create(s));
spark.createDataFrame(rowRdd,
                      new StructType()
                     .add(DataTypes.createStructField("word", DataTypes.StringType, true)))
      .groupBy("word")
      .count()
      .show();

which prints something like:

+------------+-----+
|        word|count|
+------------+-----+
|         Sit|   17|
|        Elit|    6|
|   vehicula.|    2|
|       eros.|    2|
|        nam.|    3|
|   porttitor|   18|
|consectetur.|    6|
...

Bonus: grouping with SQL, in case that counts as an alternative.
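The answer gives no code for that SQL variant. A sketch of what it could look like, assuming the same `spark` session as above (the view name `words` and the file name are mine):

    // Hypothetical sketch: register the words as a temp view, then group in SQL.
    JavaRDD<Row> rowRdd = spark.read()
            .textFile("loremipsum.txt")
            .javaRDD()
            .flatMap(s -> Arrays.asList(s.split(" ")).iterator())
            .map(RowFactory::create);
    Dataset<Row> words = spark.createDataFrame(rowRdd,
            new StructType().add("word", DataTypes.StringType));
    words.createOrReplaceTempView("words");
    spark.sql("SELECT word, COUNT(*) AS count FROM words GROUP BY word").show();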

Two: group by word and count the elements of each iterable:

Map<String, Long> counts = spark.read().textFile("loremipsum.txt")
        .javaRDD()
        .flatMap(s -> Arrays.asList(s.split(" ")).iterator())
        .groupBy(i -> i)
        .aggregateByKey(0L, (id, it) -> countIterable(it), (a, b) -> a + b)

        .collect() //collection of Tuple2: you can stop here
        .stream()
        .collect(Collectors.toMap(t -> t._1, t -> t._2));

resulting in something like:

{=50, Malesuada=4, justo.=3, potenti=2, vel.=11, purus=30, curabitur.=2...}

with countIterable defined as:

private static <T> long countIterable(Iterable<T> it) {
    long res = 0;
    for (T t : it)
        res += 1;
    return res;
}

It could also be implemented as:

return StreamSupport.stream(it.spliterator(), false).count();
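Packaged as a complete, runnable helper (the class name is mine), the stream-based version looks like this:

```java
import java.util.List;
import java.util.stream.StreamSupport;

public class CountIterableDemo {
    // Count elements by streaming over the Iterable's spliterator.
    // Works for any Iterable, not just collections that know their size.
    public static <T> long countIterable(Iterable<T> it) {
        return StreamSupport.stream(it.spliterator(), false).count();
    }

    public static void main(String[] args) {
        System.out.println(countIterable(List.of("lorem", "ipsum", "dolor"))); // prints 3
    }
}
```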

0 votes

Import the required packages, e.g. `import static org.apache.spark.sql.functions.*;`

Scala:

val strDF = spark.read.text("test.txt").toDF("line") // default column is "value"; rename it to "line"
strDF.select(explode(split(col("line"), " ")).as("word")).groupBy(col("word")).count.show

Java:

String filePath = "/test.txt";
Dataset<Row> lines = sparkSession.read().text(filePath).toDF("line");
lines.select(explode(split(col("line")," ")).as("word")).groupBy("word").count().show();
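Neither answer mentions it, but one more RDD-level shortcut exists for this question: `JavaRDD.countByValue()` collapses the map-to-1-and-reduce pattern into a single call. A sketch, assuming the same `spark` session and file as above:

    // countByValue() returns a Map<String, Long> on the driver, so it is only
    // appropriate when the set of distinct words fits in driver memory.
    Map<String, Long> counts = spark.read()
            .textFile("loremipsum.txt")
            .javaRDD()
            .flatMap(s -> Arrays.asList(s.split(" ")).iterator())
            .countByValue();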

© www.soinside.com 2019 - 2024. All rights reserved.