Efficiently aggregating a resampled collection of datetimes in pandas

Problem description · votes: 7 · answers: 1

Given the following dataset as a pandas DataFrame df:

index(as DateTime object) |  Name        |  Amount    |  IncomeOutcome
---------------------------------------------------------------
2019-01-28                |  Customer1   |  200.0     |  Income
2019-01-31                |  Customer1   |  200.0     |  Income
2019-01-31                |  Customer2   |  100.0     |  Income
2019-01-28                |  Customer2   |  -100.0    |  Outcome
2019-01-31                |  Customer2   |  -100.0    |  Outcome
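
For reproducibility, the sample frame can be constructed as follows (a sketch inferred from the table above; the index is named "index" to match the setup code used later):

import pandas as pd

df = pd.DataFrame(
    {
        "Name": ["Customer1", "Customer1", "Customer2", "Customer2", "Customer2"],
        "Amount": [200.0, 200.0, 100.0, -100.0, -100.0],
        "IncomeOutcome": ["Income", "Income", "Income", "Outcome", "Outcome"],
    },
    index=pd.to_datetime(
        ["2019-01-28", "2019-01-31", "2019-01-31", "2019-01-28", "2019-01-31"]
    ),
)
df.index.name = "index"  # the index column name assumed by the setup code below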

We perform the following steps:

grouped = df.groupby(["Name", "IncomeOutcome"])  # the keys must be passed as a list
sampled_by_month = grouped.resample("M")
aggregated = sampled_by_month.agg({"Amount": ["sum", "size"]})  # "sum" -> Amount, "size" -> MonthlyCount

The desired output should look like this:

Name       |  IncomeOutcome   |  Amount    |  MonthlyCount
------------------------------------------------------------
Customer1  |  Income          |  400.0     |  2
Customer2  |  Income          |  100.0     |  1
Customer2  |  Outcome         |  -200.0    |  2

The final step performs very poorly, probably related to Pandas Issue #20660. My first idea was to convert all datetime objects to int64, but that left me stuck on how to resample the converted data by month.
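
(For illustration: the conversion itself is trivial, but it makes monthly binning harder, not easier, because months have no fixed width in nanoseconds:)

# Datetimes as int64 nanoseconds since the epoch
ns = df.index.astype("int64")

# No integer division of `ns` yields calendar-month buckets, so recovering
# the month means going back through datetime logic anyway, e.g.:
months = df.index.to_period("M")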

Any suggestions on how to approach this?

Thanks in advance.

python pandas performance numpy
1 Answer (score: 5)

Perhaps we can optimize your solution by resampling only on the single column of interest ("Amount"):

(df.groupby(["Name", "IncomeOutcome"])['Amount']
   .resample("M")
   .agg(['sum','size'])
   .rename({'sum':'Amount', 'size': 'MonthlyCount'}, axis=1)
   .reset_index(level=-1, drop=True)
   .reset_index())

        Name IncomeOutcome  Amount  MonthlyCount
0  Customer1        Income   400.0             2
1  Customer2        Income   100.0             1
2  Customer2       Outcome  -200.0             2

(Note that reset_index(level=-1, drop=True) throws away the month level; since all of the sample data falls in January 2019, one row per (Name, IncomeOutcome) pair remains, matching the desired output.)

If this is still too slow, then I suspect the problem is that resample inside a groupby slows things down. You could instead try grouping by all three predicates with a single groupby call, using pd.Grouper for the date resampling:

(df.groupby(['Name', 'IncomeOutcome', pd.Grouper(freq='M')])['Amount']
   .agg([ ('Amount', 'sum'), ('MonthlyCount', 'size')])
   .reset_index(level=-1, drop=True)
   .reset_index())

        Name IncomeOutcome  Amount  MonthlyCount
0  Customer1        Income   400.0             2
1  Customer2        Income   100.0             1
2  Customer2       Outcome  -200.0             2
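
A related note: pd.Grouper can also group on an ordinary column via its key argument, which helps when the timestamps are not in the index (using a hypothetical Date column derived from the index here):

tmp = df.reset_index().rename(columns={"index": "Date"})  # hypothetical column name
out = (tmp.groupby(["Name", "IncomeOutcome", pd.Grouper(key="Date", freq="M")])["Amount"]
          .agg([("Amount", "sum"), ("MonthlyCount", "size")])
          .reset_index(level=-1, drop=True)
          .reset_index())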

In terms of performance, this should come out considerably faster.


Performance

Let's build a more general DataFrame for testing purposes:

# Setup
df_ = df.copy()
df1 = pd.concat([df_.reset_index()] * 100, ignore_index=True)
df = pd.concat([
        df1.replace({'Customer1': f'Customer{i}', 'Customer2': f'Customer{i+1}'}) 
        for i in range(1, 98, 2)], ignore_index=True) 
df = df.set_index('index')

df.shape
# (24500, 3)
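
As a quick sanity check before timing (both pipelines should agree; verified here only for this data):

res1 = (df.groupby(["Name", "IncomeOutcome"])['Amount']
          .resample("M")
          .agg(['sum', 'size'])
          .rename({'sum': 'Amount', 'size': 'MonthlyCount'}, axis=1)
          .reset_index(level=-1, drop=True)
          .reset_index())

res2 = (df.groupby(['Name', 'IncomeOutcome', pd.Grouper(freq='M')])['Amount']
          .agg([('Amount', 'sum'), ('MonthlyCount', 'size')])
          .reset_index(level=-1, drop=True)
          .reset_index())

pd.testing.assert_frame_equal(res1, res2)  # raises AssertionError on mismatch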

%%timeit 
(df.groupby(["Name", "IncomeOutcome"])['Amount']
   .resample("M")
   .agg(['sum','size'])
   .rename({'sum':'Amount', 'size': 'MonthlyCount'}, axis=1)
   .reset_index(level=-1, drop=True)
   .reset_index())

%%timeit
(df.groupby(['Name', 'IncomeOutcome', pd.Grouper(freq='M')])['Amount']
   .agg([ ('Amount', 'sum'), ('MonthlyCount', 'size')])
   .reset_index(level=-1, drop=True)
   .reset_index())

First approach (resample inside groupby):
1.71 s ± 85.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Second approach (single groupby with pd.Grouper):
24.2 ms ± 1.82 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

The roughly 70x gap fits the diagnosis above: the resampling is repeated for each group, while the single groupby call aggregates everything in one pass.