Removing outliers from Pandas pivot_table rows

Question · 1 vote · 3 answers

I am currently working on a problem where I need to look at some purchased parts and determine whether our efforts to reduce cost have been successful.

I have run into an issue. Because our buyers can choose to enter an order in any unit of measure (UOM), but do not always remember to enter the conversion factor, we sometimes end up with data like the DataFrame below:

import numpy as np
import pandas as pd

df = pd.DataFrame(
    [
    ['AABBCCDD','2014/2015','Q2',31737.60],
    ['AABBCCDD','2014/2015','Q2',31737.60],
    ['AABBCCDD','2014/2015','Q2',31737.60],
    ['AABBCCDD','2014/2015','Q3',89060.84],
    ['AABBCCDD','2015/2016','Q3',71586.00],
    ['AABBCCDD','2016/2017','Q3',89060.82],
    ['AABBCCDD','2017/2018','Q3',98564.40],
    ['AABBCCDD','2017/2018','Q3',110691.24],
    ['AABBCCDD','2017/2018','Q4',93390.00],
    ['AABBCCDD','2018/2019','Q2',90420.00],
    ['AABBCCDD','2018/2019','Q3',13.08],
    ['AABBCCDD','2018/2019','Q3',13.08]
    ], 
    columns=['PART_NO','FiscalYear','FiscalQuarter','Price'])

As you can tell, the unit cost of the last two purchases is dramatically lower. This is because we previously bought the part as a whole piece, while the buyer has now chosen to enter the order in square inches of material.

Now, the correct action is to go to the buyer and have him/her fix the issue. I just want to spot these problems up front.

I have tried pivoting the data:

df_tab = pd.pivot_table(df, values='Price', index=['PART_NO'], columns=['FiscalYear','FiscalQuarter'], aggfunc=np.mean)

which gives the following result:

[screenshot of df_tab: a single row for AABBCCDD, with the mean Price under each FiscalYear/FiscalQuarter column]

Of course, I have thousands of parts that would go into this DataFrame, one row per part number. It would probably be split by date rather than by quarter, so the above is only meant as a simplified example.

How would I go about the following two scenarios (a rough sketch of both follows the list)?

  1. Replace the outliers with null values, adding a column at the end
  2. Flag that an outlier is present in a given data row
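For illustration, here is a minimal sketch of both scenarios, assuming a per-part IQR rule; the 1.5 fence and the column names IsOutlier and PriceClean are my own choices, not from the question:

# continues from the df defined above
grp = df.groupby('PART_NO')['Price']
q1 = grp.transform(lambda s: s.quantile(0.25))
q3 = grp.transform(lambda s: s.quantile(0.75))
iqr = q3 - q1

# Scenario 2: flag rows whose price falls outside the IQR fence for their part
df['IsOutlier'] = (df['Price'] < q1 - 1.5 * iqr) | (df['Price'] > q3 + 1.5 * iqr)

# Scenario 1: add a column at the end with the outlier prices replaced by null
df['PriceClean'] = df['Price'].where(~df['IsOutlier'])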

------------- EDIT --------------

I used the suggestions below, plus some other inspiration, and arrived at the following solution:

# Imports
import pyodbc
import urllib
from sql import SQL
import pandas as pd
from sqlalchemy import create_engine

# Set variables
upperQuantile = 0.8
lowerQuantile = 0.2

# Connect to server / database
params = urllib.parse.quote_plus("Driver={SQL Server Native Client 11.0};Server=LT02670;Database=staging;Trusted_Connection=yes;")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

# Create dataframe containing raw data
df = pd.read_sql(SQL(), engine)

# define lower and upper quantile functions for outlier detection
def q1(x):
    return x.quantile(lowerQuantile)

def q2(x):
    return x.quantile(upperQuantile)

# aggregation spec: median, std and the two quantiles per part, used for sorting out outliers
f = {'PO_UNIT_PRICE_CURRENT_CURRENCY': ['median', 'std', q1,q2]}

# group data and add function to data (adds columns median, std, q1 and q2)
dfgrp = df.groupby(['PART_NO']).agg(f).reset_index()

# Isolate part numbers in dataframe
dfgrpPart = pd.DataFrame(dfgrp['PART_NO'])

# Isolate value columns in dataframe
dfgrpStat = dfgrp['PO_UNIT_PRICE_CURRENT_CURRENCY']

# Join categorical data with values (this is done in order to eliminate the multiindex caused by the groupby function)
dfgrp = dfgrpPart.join(dfgrpStat)

# Add new columns to raw data extract
df = df.join(dfgrp.set_index('PART_NO'), on='PART_NO').reset_index()

# Remove outliers and 0-values
idx = df[df['PO_UNIT_PRICE_CURRENT_CURRENCY'] < df['q1']].index
df.drop(idx, inplace=True)
idx = df[df['PO_UNIT_PRICE_CURRENT_CURRENCY'] > df['q2']].index
df.drop(idx, inplace=True)
idx = df[df['PO_UNIT_PRICE_CURRENT_CURRENCY'] <= 0].index
df.drop(idx, inplace=True)

# Split dataframe into fiscal year chunks, and build lists of part numbers
df_14_15 = df[df['FiscalYear'].str.match('2014/2015')]['PART_NO'].to_list()
# df_15_16 = df[df['FiscalYear'].str.match('2015/2016')]['PART_NO'].to_list()
df_16_17 = df[df['FiscalYear'].str.match('2016/2017')]['PART_NO'].to_list()
# df_17_18 = df[df['FiscalYear'].str.match('2017/2018')]['PART_NO'].to_list()
df_18_19 = df[df['FiscalYear'].str.match('2018/2019')]['PART_NO'].to_list()
df_19_20 = df[df['FiscalYear'].str.match('2019/2020')]['PART_NO'].to_list()

# create one list of part numbers present in all of the selected years; I have chosen only some years, as we rarely order the same parts six years running
partsList = list(set(df_14_15) & set(df_16_17) & set(df_18_19))

# Use list of part numbers to filter out raw data into output dataframe
dfAllYears = df[df['PART_NO'].isin(partsList)]

# write data to excel file for further analysis, this will overwrite existing file so be careful
dfAllYears.to_excel("output.xlsx", index=False, sheet_name='Data')

This allowed me to do the analysis and move on.

I am not entirely happy with the code, and I suspect I am doing some things in an unnecessarily complicated way rather than making full use of pandas.
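If it helps, here is a sketch of how the quantile filtering and the year intersection might be compressed using groupby().transform(), reusing the column names and quantile variables from the script above (untested against the real data):

# per-part quantile bounds computed directly onto each row
price = df['PO_UNIT_PRICE_CURRENT_CURRENCY']
q_low = df.groupby('PART_NO')['PO_UNIT_PRICE_CURRENT_CURRENCY'].transform(lambda s: s.quantile(lowerQuantile))
q_high = df.groupby('PART_NO')['PO_UNIT_PRICE_CURRENT_CURRENCY'].transform(lambda s: s.quantile(upperQuantile))

# drop outliers and non-positive prices in one step
df = df[(price > 0) & (price >= q_low) & (price <= q_high)]

# keep only parts bought in every one of the selected fiscal years
years = ['2014/2015', '2016/2017', '2018/2019']
per_year = [set(df.loc[df['FiscalYear'] == y, 'PART_NO']) for y in years]
dfAllYears = df[df['PART_NO'].isin(set.intersection(*per_year))]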

python pandas lambda pivot-table
3 Answers
1 vote

To properly determine whether something is an outlier, you would need to add some statistics. However, that is a bit beyond the scope of what you need here.

I would suggest simply sorting by Price and looking at the values at either end of the DataFrame.

You can do it like this:

df = df.sort_values('Price').reset_index()

To replace those values with null, you can just target the indices, select all the Price values in that range, and set them to None.
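For example (a sketch; the assumption that the two lowest-priced rows are the bad entries is mine):

# grab the indices of the suspiciously low prices and null them out
bad_idx = df.nsmallest(2, 'Price').index
df.loc[bad_idx, 'Price'] = None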


1 vote

One way you can do this is to filter the column for extreme values (here the top and bottom 5%, roughly 10% in total); by changing low and high you can set the boundaries of what counts as extreme. The extreme low and high values are then replaced with NaN, and the rows that end up containing NaN, which are the outliers in this case, can be taken out as a separate DataFrame.

import pandas as pd

df = pd.DataFrame(
    [
        ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
        ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
        ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
        ['AABBCCDD', '2014/2015', 'Q3', 89060.84],
        ['AABBCCDD', '2015/2016', 'Q3', 71586.00],
        ['AABBCCDD', '2016/2017', 'Q3', 89060.82],
        ['AABBCCDD', '2017/2018', 'Q3', 98564.40],
        ['AABBCCDD', '2017/2018', 'Q3', 110691.24],
        ['AABBCCDD', '2017/2018', 'Q4', 93390.00],
        ['AABBCCDD', '2018/2019', 'Q2', 90420.00],
        ['AABBCCDD', '2018/2019', 'Q3', 13.08],
        ['AABBCCDD', '2018/2019', 'Q3', 13.08]
    ],
    columns=['PART_NO', 'FiscalYear', 'FiscalQuarter', 'Price'])

filt_df = df.loc[:, df.columns == 'Price']

low = .05
high = .95
quant_df = filt_df.quantile([low, high])
print(quant_df)

filt_df = filt_df.apply(lambda x: x[(x > quant_df.loc[low, x.name]) &
                                    (x < quant_df.loc[high, x.name])], axis=0)

filt_df = pd.concat([df.loc[:, 'PART_NO'], filt_df], axis=1)
filt_df = pd.concat([df.loc[:, 'FiscalYear'], filt_df], axis=1)
filt_df = pd.concat([df.loc[:, 'FiscalQuarter'], filt_df], axis=1)

Outliers = filt_df[filt_df.isnull().any(axis=1)]

print(Outliers)

Output:

   FiscalQuarter FiscalYear   PART_NO  Price
7             Q3  2017/2018  AABBCCDD    NaN
10            Q3  2018/2019  AABBCCDD    NaN
11            Q3  2018/2019  AABBCCDD    NaN

In this case I am not sure whether flagging index 7 is right or wrong, but you can specify whatever boundaries you want, as long as they are between 0 and 1, and then look through the filtered DataFrame to see which values stand out.


1 vote

I think comparing each price against the mean price for its PART_NO will readily show this (assuming prices do not fluctuate that much).

import pandas as pd

df = pd.DataFrame(
[
    ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
    ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
    ['AABBCCDD', '2014/2015', 'Q2', 31737.60],
    ['AABBCCDD', '2014/2015', 'Q3', 89060.84],
    ['AABBCCDD', '2015/2016', 'Q3', 71586.00],
    ['AABBCCDD', '2016/2017', 'Q3', 89060.82],
    ['AABBCCDD', '2017/2018', 'Q3', 98564.40],
    ['AABBCCDD', '2017/2018', 'Q3', 110691.24],
    ['AABBCCDD', '2017/2018', 'Q4', 93390.00],
    ['AABBCCDD', '2018/2019', 'Q2', 90420.00],
    ['AABBCCDD', '2018/2019', 'Q3', 13.08],
    ['AABBCCDD', '2018/2019', 'Q3', 13.08]
],
columns=['PART_NO', 'FiscalYear', 'FiscalQuarter', 'Price'])


avg_df = df.groupby('PART_NO')['Price'].mean().to_frame().reset_index().rename(columns={'Price': 'AVG_PRICE'})

df = df.merge(avg_df)

df['ratio'] = df['AVG_PRICE']/df['Price']

Output:

     PART_NO FiscalYear FiscalQuarter      Price     AVG_PRICE        ratio
0   AABBCCDD  2014/2015            Q2   31737.60  61501.021667     1.937797
1   AABBCCDD  2014/2015            Q2   31737.60  61501.021667     1.937797
2   AABBCCDD  2014/2015            Q2   31737.60  61501.021667     1.937797
3   AABBCCDD  2014/2015            Q3   89060.84  61501.021667     0.690551
4   AABBCCDD  2015/2016            Q3   71586.00  61501.021667     0.859121
5   AABBCCDD  2016/2017            Q3   89060.82  61501.021667     0.690551
6   AABBCCDD  2017/2018            Q3   98564.40  61501.021667     0.623968
7   AABBCCDD  2017/2018            Q3  110691.24  61501.021667     0.555609
8   AABBCCDD  2017/2018            Q4   93390.00  61501.021667     0.658540
9   AABBCCDD  2018/2019            Q2   90420.00  61501.021667     0.680171
10  AABBCCDD  2018/2019            Q3      13.08  61501.021667  4701.912971
11  AABBCCDD  2018/2019            Q3      13.08  61501.021667  4701.912971

The ratio is huge for the outliers. If you filter on df.ratio > 5, or whatever number you decide on, that will get you all the records you want.
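For example (a sketch; the threshold of 5 is just the number suggested above):

# rows whose price is far below the average for that part
suspects = df[df['ratio'] > 5]
print(suspects[['PART_NO', 'FiscalYear', 'FiscalQuarter', 'Price', 'ratio']])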
