pyspark: RDD-based operations only

Problem description

I am trying to use only RDD-based operations. I have a file similar to this:

0, Alpha,-3.9, 4, 2001-02-01, 5, 20
0, Beta,-3.8, 3, 2002-02-01, 6, 21
1, Gamma,-3.7, 8, 2003-02-01, 7, 22
0, Alpha,-3.5, 5, 2004-02-01, 8, 23
0, Alpha,-3.9, 6, 2005-02-01, 8, 27

First, I load the data into an RDD as follows:

rdd = sc.textFile(myDataset)

Then, I am interested in the distinct values of the name field (the second element) of each raw line, meaning Alpha, Beta, Gamma. In this case, I expect 3 distinct elements. This is what I did:

coll = [] # to collect the distinct elements
list_ = rdd.collect() # to get the list
for i in list_:
    result = myFun(i) # this function I created to process line by line and return a tuple.
    if result[1] not in coll:
        coll.append(result[1])

Is there a faster or better way to do this using only RDD-based operations?

python python-3.x pyspark bigdata rdd
1 Answer

You can use map together with distinct, as shown below:

rdd = sc.textFile('path/to/file/input.txt')
rdd.take(10)
#[u'0, Alpha,-3.9, 4, 2001-02-01, 5, 20', u'0, Beta,-3.8, 3, 2002-02-01, 6, 21', u'1, Gamma,-3.7, 8, 2003-02-01, 7, 22', u'0, Alpha,-3.5, 5, 2004-02-01, 8, 23', u'0, Alpha,-3.9, 6, 2005-02-01, 8, 27']

list_ = rdd.map(lambda line: line.split(",")).map(lambda e : e[1]).distinct().collect() 

list_
[u' Alpha', u' Beta', u' Gamma']
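
Note that each value keeps a leading space, because the file separates fields with ", ". If that is unwanted, the same map can also strip it. A minimal sketch, assuming the same rdd as above (the exact output shown is only what one would expect):

list_ = rdd.map(lambda line: line.split(",")[1].strip()).distinct().collect()  # split, take field 1, strip whitespace, deduplicate

list_
# expected: ['Alpha', 'Beta', 'Gamma']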