pyspark.sql.utils.IllegalArgumentException: u'Field "features" does not exist.'


I am trying to run a Random Forest classifier and evaluate the model with cross-validation, using pySpark. The input CSV file is loaded as a Spark DataFrame. But I run into a problem while building the model.

Here is the code.

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics
sc = SparkContext()
sqlContext = SQLContext(sc)
trainingData =(sqlContext.read
         .format("com.databricks.spark.csv")
         .option("header", "true")
         .option("inferSchema", "true")
         .load("/PATH/CSVFile"))
numFolds = 10 
rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="V5409",featuresCol="features",seed=42)
evaluator = MulticlassClassificationEvaluator().setLabelCol("V5409").setPredictionCol("prediction").setMetricName("accuracy")
paramGrid = ParamGridBuilder().build()

pipeline = Pipeline(stages=[rf])
crossval = CrossValidator(
    estimator=pipeline,
    estimatorParamMaps=paramGrid,
    evaluator=evaluator,
    numFolds=numFolds)
model = crossval.fit(trainingData)
print accuracy

I got the following error:

Traceback (most recent call last):
  File "SparkDF.py", line 41, in <module>
    model = crossval.fit(trainingData)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
    return self._fit(dataset)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/tuning.py", line 236, in _fit
    model = est.fit(train, epm[j])
  File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
    return self._fit(dataset)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/pipeline.py", line 108, in _fit
    model = stage.fit(dataset)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
    return self._fit(dataset)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/wrapper.py", line 236, in _fit
    java_model = self._fit_java(dataset)
  File "/usr/local/spark-2.1.1/python/pyspark/ml/wrapper.py", line 233, in _fit_java
    return self._java_obj.fit(dataset._jdf)
  File "/home/hadoopuser/anaconda2/lib/python2.7/site-packages/py4j/java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/local/spark-2.1.1/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'Field "features" does not exist.'
hadoopuser@rackserver-PowerEdge-R220:~/workspace/RandomForest_CV$ 

Please help me solve this problem in pySpark. Thank you.

Here are the details of the dataset. No, I do not have a dedicated features column. Below is the output of trainingData.take(5), which shows the first 5 rows of the dataset:

[Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)]

Here V4366 to V524 are the features; V5409 is the class label.

apache-spark pyspark apache-spark-sql spark-dataframe apache-spark-ml
2 Answers
3 votes

Spark dataframes are not used like that in Spark ML; all your features need to be vectors in a single column, usually named features. Here is how you can do it using the 5 rows you have provided as an example:

spark.version
# u'2.2.0'

from pyspark.sql import Row
from pyspark.ml.linalg import Vectors

# your sample data:
temp_df = spark.createDataFrame([Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)])

# note: in Spark 2.x, Row sorts keyword fields alphabetically, so V5409 (the label) ends up last;
# x[0:-1] therefore picks the 10 feature values and x[-1] the label
trainingData = temp_df.rdd.map(lambda x: (Vectors.dense(x[0:-1]), x[-1])).toDF(["features", "label"])
trainingData.show()
# +--------------------+-----+ 
# |            features|label|
# +--------------------+-----+
# |[-0.104,0.005,-0....|    0| 
# |[-0.137,0.001,-0....|    0|
# |[-0.155,-0.006,-0...|    0|
# |[-0.108,0.005,-0....|    0|
# |[-0.139,0.003,-0....|    0|
# +--------------------+-----+

After that, your pipeline should run fine (I assume that you do indeed have multi-class classification, since your sample contains only 0's as labels); just change the label column in your rf and evaluator as follows:

rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="label",featuresCol="features",seed=42)
evaluator = MulticlassClassificationEvaluator().setLabelCol("label").setPredictionCol("prediction").setMetricName("accuracy")

Finally, print accuracy will not work - you need model.avgMetrics instead.
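
For completeness, a minimal sketch (reusing the crossval model from the question, whose parameter grid has a single entry) of how that averaged metric could be retrieved:

accuracy = model.avgMetrics[0]  # one averaged value per entry in the parameter grid
print(accuracy)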


1 vote

I would like to add my 5 cents to desertnaut's answer - as for now (Spark 2.2.0) there is a quite handy VectorAssembler class which handles the transformation of multiple columns into one vector column. Then the code looks like this:

from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler

# your sample data:
temp_df = spark.createDataFrame([Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)])

assembler = VectorAssembler(
    inputCols=['V4366', 'V4460', 'V4916', 'V1495', 'V1639', 'V1967', 'V3049', 'V3746', 'V3869', 'V524'],
    outputCol='features')

trainingData = assembler.transform(temp_df)
trainingData.show()
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+
# | V1495| V1639| V1967| V3049| V3746| V3869|V4366| V4460| V4916| V524|V5409|            features|
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+
# |-0.104| 0.005|-0.008| 0.177|-0.675|-3.451|  0.0| 0.232|-0.017|0.004|    0|[0.0,0.232,-0.017...|
# |-0.137| 0.001| -0.01|  0.01|-0.867|-2.759|  0.0| 0.111|-0.003|  0.0|    0|[0.0,0.111,-0.003...|
# |-0.155|-0.006|-0.019|-0.706| 0.166| 0.189|  0.0|-0.391|-0.003|0.001|    0|[0.0,-0.391,-0.00...|
# |-0.108| 0.005|-0.002| 0.033|-0.787|-0.926|  0.0| 0.098|-0.012|0.002|    0|[0.0,0.098,-0.012...|
# |-0.139| 0.003|-0.006|-0.045|-0.208|-0.782|  0.0| 0.026|-0.004|0.001|    0|[0.0,0.026,-0.004...|
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+

This way it can easily be integrated as a processing step in a pipeline. Also, an important difference here is that the new features column is appended to the dataframe.
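
To illustrate, a minimal sketch (reusing assembler and temp_df from above, plus the classifier settings from the question; the label column keeps its original name V5409 here) of how the assembler could slot into a pipeline:

from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier

rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5,
                            labelCol="V5409", featuresCol="features", seed=42)
pipeline = Pipeline(stages=[assembler, rf])  # assembler builds 'features' right before rf consumes it
model = pipeline.fit(temp_df)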
