Converting a dataframe from pandas to PySpark and casting to Foundry data types

Question | Votes: 3 | Answers: 1

For those working in the Foundry environment: I am trying to build a pipeline in Code Repositories to process a raw dataset (from an Excel file) into a clean dataset that will later be analyzed in Contour. To do this I am working in Python, but the pipeline seems to run on PySpark, so at some point I have to convert the dataset I cleaned with pandas into a PySpark dataframe, and that is where I am stuck.

I have looked at several posts on Stack Overflow about converting a pandas DF to a PySpark DF, but so far none of the solutions has worked. Whenever I run the conversion, there is always a data type that fails to convert, even though I enforce a schema.

The Python part of the code has been tested successfully in Spyder (importing from and exporting to Excel files) and gives the expected results. It only fails, somehow, when I need to convert to PySpark.

from transforms.api import transform_pandas, Input, Output
from pyspark.sql.types import StructType, StructField, IntegerType, StringType


@transform_pandas(
    Output("/MDM_OUT_OF_SERVICE_EVENTS_CLEAN"),
    OOS_raw=Input("/MDM_OUT_OF_SERVICE_EVENTS"),
)
def DA_transform(OOS_raw):

    ''' Code Section in Python '''  # pandas cleaning steps (omitted) produce OOS_dup

    mySchema = StructType([
        StructField(OOS_dup.columns[0], IntegerType(), True),
        StructField(OOS_dup.columns[1], StringType(), True),
        ...])

    OOS_out = sqlContext.createDataFrame(OOS_dup, schema=mySchema,
                                         verifySchema=False)

    return OOS_out

I sometimes get this error message:

AttributeError: 'unicode' object has no attribute 'toordinal'.

According to this post, What is causing 'unicode' object has no attribute 'toordinal' in pyspark?, this happens because PySpark fails to cast the data to DateType.

But the data is in datetime64[ns] in pandas. I have tried converting this column to string and to int, but it fails as well.

Here is a picture of the Python output dataset: [screenshot omitted]

And here are the data types pandas returns after cleaning the dataset:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4972 entries, 0 to 4971
Data columns (total 51 columns):
OOS_ID                       4972 non-null int64
OPERATOR_CODE                4972 non-null object
ATA_CAUSE                    4972 non-null int64
EVENT_CODE                   3122 non-null object
AC_MODEL                     4972 non-null object
AC_SN                        4972 non-null int64
OOS_DATE                     4972 non-null datetime64[ns]
AIRPORT_CODE                 4915 non-null object
RTS_DATE                     4972 non-null datetime64[ns]
EVENT_TYPE                   4972 non-null object
CORRECTIVE_ACTION            417 non-null object
DD_HOURS_OOS                 4972 non-null float64
EVENT_DESCRIPTION            4972 non-null object
EVENT_CATEGORY               4972 non-null object
ATA_REPORTED                 324 non-null float64
TOTAL_CAUSES                 4875 non-null float64
EVENT_NUMBER                 3117 non-null float64
RTS_TIME                     4972 non-null object
OOS_TIME                     4972 non-null object
PREV_REPORTED                4972 non-null object
FERRY_IND                    4972 non-null object
REPAIR_STN_CODE              355 non-null object
MAINT_DOWN_TIME              4972 non-null float64
LOGBOOK_RECORD_IDENTIFIER    343 non-null object
RTS_IND                      4972 non-null object
READY_FOR_USE                924 non-null object
DQ_COMMENTS                  2 non-null object
REVIEWED                     5 non-null object
DOES_NOT_MEET_SPECS          4 non-null object
CORRECTED                    12 non-null object
EDITED_BY                    4972 non-null object
EDIT_DATE                    4972 non-null datetime64[ns]
OUTSTATION_INDICATOR         3801 non-null object
COMMENT_TEXT                 11 non-null object
ATA_CAUSE_CHAPTER            4972 non-null int64
ATA_CAUSE_SECTION            4972 non-null int64
ATA_CAUSE_COMPONENT          770 non-null float64
PROCESSOR_COMMENTS           83 non-null object
PARTS_AVAIL_AT_STATION       4972 non-null object
PARTS_SHIPPED_AT_STATION     4972 non-null object
ENGINEER_AT_STATION          4972 non-null object
ENGINEER_SENT_AT_STATION     4972 non-null object
SOURCE_FILE                  4972 non-null object
OOS_Month                    4972 non-null float64
OOS_Hour                     4972 non-null float64
OOS_Min                      4972 non-null float64
RTS_Month                    4972 non-null float64
RTS_Hour                     4972 non-null float64
RTS_Min                      4972 non-null float64
OOS_Timestamp                4972 non-null datetime64[ns]
RTS_Timestamp                4972 non-null datetime64[ns]
dtypes: datetime64[ns](5), float64(12), int64(5), object(29)
Tags: python, pandas, pyspark, cloudfoundry
1 Answer

0 votes

In case it helps anyone, I found in the official Foundry documentation how to properly transition between a pandas and a PySpark dataframe.

OOS_dup is the pandas dataframe I want to convert back to Spark.

    import pandas as pd
    from pyspark.sql import functions as F

    # Extract the name of each column together with its pandas data type
    col = OOS_dup.columns
    col_type = [OOS_dup[c].dtype.name for c in col]

    df_schema = pd.DataFrame({"field": col, "data_type": col_type})

    # Convert to a Spark dataframe first: withColumn and F.when below
    # exist on Spark dataframes, not on pandas ones
    OOS_dup = sqlContext.createDataFrame(OOS_dup)

    # Define a function to replace missing ("NaN") cells with null
    def replace_missing(df, col_names):
        for col in col_names:
            df = df.withColumn(col,
                               F.when(df[col] == "NaN", None).otherwise(df[col]))
        return df

    # Replace missing values
    OOS_dup = replace_missing(OOS_dup, col)

    # Define a function to cast each column to the Spark type
    # matching its pandas dtype
    def change_type(df, col_names, dtypes):
        for col, dtype in zip(col_names, dtypes):
            if dtype == "float64":
                df = df.withColumn(col, df[col].cast("double"))
            elif dtype == "int64":
                df = df.withColumn(col, df[col].cast("int"))
            elif dtype == "datetime64[ns]":
                df = df.withColumn(col, df[col].cast("date"))
            else:
                df = df.withColumn(col, df[col].cast("string"))
        return df

    # Cast each column to the proper data type
    OOS_dup = change_type(OOS_dup, df_schema["field"], df_schema["data_type"])