Not getting my expected output in MapReduce with Python code


I am running this code to compute probabilities in Hadoop. My data is in a CSV file with roughly 10k+ rows, and I am running the job on a Google Dataproc cluster. Please tell me how to get the expected output. One last thing: the problem is most likely in the logic or in one of the functions.

#!/usr/bin/env python3
"""mapper.py"""
import sys

# Get input lines from stdin
for line in sys.stdin:
    # Remove spaces from beginning and end of the line
    line = line.strip()

    # Split it into tokens
    #tokens = line.split()

    #Get probability_mass values
    for probability_mass in line:
        print("None\t{}".format(probability_mass))
#!/usr/bin/env python3
"""reducer.py"""
import sys
from collections import defaultdict


counts = defaultdict(int)

# Get input from stdin
for line in sys.stdin:
    #Remove spaces from beginning and end of the line
    line = line.strip()

    # skip empty lines
    if not line:
        continue  

    # parse the input from mapper.py
    k,v = line.split('\t', 1)
    counts[v] += 1

total = sum(counts.values())
probability_mass = {k:v/total for k,v in counts.items()}
print(probability_mass)

My CSV file looks like this:

probability_mass
10
10
60
10
30
Expected output (the probability of each number):

{10: 0.6, 60: 0.2, 30: 0.2}

but the result still shows up like this:
{1:0} {0:0} {3:0} {6:0} {1:0} {6:0}

I save this command in nano and then run it:

yarn jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
-D mapred.output.key.comparator.class=org.apache.hadoop.mapred.lib.KeyFieldBasedComparator \
-D mapred.text.key.comparator.options=-n \
-files mapper.py,reducer.py \
-mapper "python mapper.py" \
-reducer "python reducer.py" \
-input /tmp/data.csv \
-output /tmp/output
Tags: python, python-3.x, hadoop, mapreduce, hadoop-streaming
1 Answer

You are splitting the line into individual characters, which explains why you end up with 1, 3, 6, 0, etc. as your map keys.
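For illustration, iterating over a Python string yields one character at a time, so a value such as "60" is emitted as two separate records:

line = "60"
# iterating over a string yields single characters, not the whole value
for probability_mass in line:
    print("None\t{}".format(probability_mass))
# emits two key/value records: "None\t6" and "None\t0"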

Don't loop; just print the value of the whole line.

print("None\t{}".format(line))

Then, each count is divided by a larger integer total. Under Python 2 (which is typically what a bare python in the streaming command resolves to on older cluster images), integer division truncates toward zero, so every probability comes out as 0.
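With the sample data above, for example:

counts = {"10": 3, "60": 1, "30": 1}
total = sum(counts.values())   # 5
# Python 2: 3 / 5 == 0   (integer division truncates)
# Python 3: 3 / 5 == 0.6 (true division)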

You can fix this by doing:

total = float(sum(counts.values()))
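With that change applied, a sketch of the full corrected reducer is shown below; converting the keys to int before printing is an assumption added here so the output matches the expected form {10: 0.6, 60: 0.2, 30: 0.2}:

#!/usr/bin/env python3
"""reducer.py"""
import sys
from collections import defaultdict

counts = defaultdict(int)

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue

    # parse the key/value pair emitted by mapper.py
    k, v = line.split('\t', 1)
    counts[v] += 1

# float() forces true division so the probabilities are not truncated to 0 under Python 2
total = float(sum(counts.values()))

# int(k) is an assumption so the printed dict has numeric keys like {10: 0.6, ...}
probability_mass = {int(k): v / total for k, v in counts.items()}
print(probability_mass)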