How to extract each word from a text file in Scala

Problem description

I am quite new to Scala. I have a text file containing a single line, and the words in the file are separated by semicolons (;). I want to extract each word, remove the whitespace, convert everything to lowercase, and then access each word by its index. Below is my approach:

newListUpper2.txt contains (Bed;  chairs;spoon; CARPET;curtains )
val file = sc.textFile("myfile.txt")
val lower = file.map(x=>x.toLowerCase)
val result = lower.flatMap(x=>x.trim.split(";"))
result.collect.foreach(println)

Below is a copy of the REPL session when the code is executed:

    scala> val file = sc.textFile("newListUpper2.txt")
    file: org.apache.spark.rdd.RDD[String] = newListUpper2.txt MapPartitionsRDD[5] at textFile at 
    <console>:24
    scala> val lower = file.map(x=>x.toLowerCase)
    lower: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[6] at map at <console>:26
    scala> val result = lower.flatMap(x=>x.trim.split(";"))
    result: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at flatMap at <console>:28
    scala> result.collect.foreach(println)
    bed
     chairs
    spoon
     carpet
    curtains
    scala> result(0)
    <console>:31: error: org.apache.spark.rdd.RDD[String] does not take parameters
           result(0)

The results are not trimmed, and passing an index as a parameter to get the word at that index produces an error. If I pass each word's index as a parameter, my expected result should look like this:

result(0)= bed
result(1) = chairs
result(2) = spoon
result(3) = carpet
result(4) = curtains

What am I doing wrong?

scala apache-spark indexing text-files
1 Answer
newListUpper2.txt contains (Bed;  chairs;spoon; CARPET;curtains )
val file = sc.textFile("myfile.txt")
val lower = file.map(x=>x.toLowerCase)
val result = lower.flatMap(x=>x.trim.split(";")) // x = `bed;  chairs;spoon; carpet;curtains` , x.trim does not work. trim func effective for head and tail only
result.collect.foreach(println)
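
Below is a minimal sketch of one way to fix both issues, assuming the same newListUpper2.txt and a Spark shell where sc is already defined: split on the semicolon first, then trim and lowercase each individual token, and collect the RDD into a local array so it can be indexed.

val file = sc.textFile("newListUpper2.txt")
// Split first, then clean each token, so the whitespace around every word is removed
val result = file.flatMap(_.split(";")).map(_.trim.toLowerCase)
// An RDD has no apply(index) method, which is why result(0) fails in the REPL;
// collecting into a local Array[String] gives ordinary index access
val words: Array[String] = result.collect()
words(0) // bed
words(1) // chairs
words(4) // curtains

If the file were too large to collect, result.zipWithIndex would give an RDD of (word, index) pairs that can be filtered by position instead.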