TypeError: data should be an RDD of LabeledPoint, but got


I am getting the error:

TypeError: data should be an RDD of LabeledPoint, but got <type 'numpy.ndarray'>

when I run:

import sys
import numpy as np
from pyspark import SparkConf, SparkContext
from pyspark.mllib.classification import LogisticRegressionWithSGD


conf = (SparkConf().setMaster("local")
.setAppName("Logistic Regression")
.set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)


def mapper(line):
    feats = line.strip().split(",") 
    label = feats[len(feats) - 1]       # Last column is the label
    feats = feats[2: len(feats) - 1]    # remove id and type column
    feats.insert(0,label)
    features = [ float(feature) for feature in feats ] # need floats
    return np.array(features)

data = sc.textFile("test.csv")
parsedData = data.map(mapper)

# Train model
model = LogisticRegressionWithSGD.train(parsedData)

I get the error on the line model = LogisticRegressionWithSGD.train(parsedData).

parsedData should be an RDD, so I don't understand why this happens.

GitHub link to the full source code

python numpy apache-spark pyspark
1 Answer

parsedData should be an RDD, so I don't understand why this happens.

The problem isn't that parsedData is not an RDD; the problem is what it contains. As the message says, LogisticRegressionWithSGD.train expects an RDD[LabeledPoint], but you are passing it an RDD[numpy.ndarray].

from pyspark.mllib.regression import LabeledPoint

def mapper(line):
    ...
    return LabeledPoint(label, features)
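For completeness, here is a minimal sketch of what the full mapper body could look like, written as plain Python so the parsing can be checked without a Spark installation. The function name parse_line is a stand-in, and the CSV layout (id, type, feature columns, label last) is assumed from the question's mapper; in the actual Spark job you would return LabeledPoint(label, features) instead of a tuple.

```python
# Sketch of the corrected mapper logic. Layout assumed from the question:
# column 0 = id, column 1 = type, middle columns = features, last = label.
# In the real job: return LabeledPoint(label, features) (pyspark.mllib.regression).
def parse_line(line):
    feats = line.strip().split(",")
    label = float(feats[-1])                     # last column is the label
    features = [float(f) for f in feats[2:-1]]   # drop id, type, and label
    return label, features

# Example row: id, type, two feature columns, label
label, features = parse_line("7,a_type,1.5,2.5,1")
```

The key difference from the question's code is that the label is kept separate from the feature vector rather than inserted at position 0 of a single numpy array, which is exactly the separation LabeledPoint encodes.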