How to resample rather than groupby intervals in pandas

Question (votes: 1, answers: 1)

I have a df with StartDate and EndDate columns:

df.loc[:,['StartDate','EndDate']].head()
Out[92]: 
                    StartDate                    EndDate
0 2016-05-19 14:19:14.820002 2016-05-19 14:19:17.899999
1 2016-05-19 14:19:32.119999 2016-05-19 14:19:37.020002

I want to get a df2 at an arbitrary frequency, where each bin holds the amount of time in that bin that falls inside a (StartDate, EndDate) interval, e.g.:

df2 ('1s')
2016-05-19 14:19:14.000000              0.179998
2016-05-19 14:19:15.000000              1
2016-05-19 14:19:16.000000              1
2016-05-19 14:19:17.000000              0.899999
2016-05-19 14:19:18.000000              0

Of course, something like

groupby(StartDate.date.dt)['Duration']

where 'Duration' is 'EndDate' - 'StartDate', does not work.
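For reference, a minimal sketch of that naive approach (a reconstruction, assuming the columns are already datetime64): it groups each interval by its starting second, so the whole duration lands in a single bin instead of being split across the bins the interval overlaps.

# Hypothetical naive attempt, not the solution: each interval's full length
# is credited to the bin of its StartDate only.
df['Duration'] = (df['EndDate'] - df['StartDate']).dt.total_seconds()
naive = df.groupby(df['StartDate'].dt.floor('1s'))['Duration'].sum()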

python pandas datetime intervals
1 Answer

2 votes
import numpy as np
import pandas as pd
df = pd.DataFrame({'StartDate':['2016-05-19 14:19:14.820002','2016-05-19 14:19:32.119999', '2016-05-19 14:19:17.899999'],
                   'EndDate':['2016-05-19 14:19:17.899999', '2016-05-19 14:19:37.020002', '2016-05-19 14:19:18.5']})

df2 = pd.melt(df, var_name='type', value_name='date')
df2['date'] = pd.to_datetime(df2['date'])
df2['sign'] = np.where(df2['type']=='StartDate', 1, -1)
min_date = df2['date'].min().to_period('1s').to_timestamp()
max_date = (df2['date'].max() + pd.Timedelta('1s')).to_period('1s').to_timestamp()
index = pd.date_range(min_date, df2['date'].max(), freq='1s').union(df2['date'])
df2 = df2.groupby('date').sum()
df2 = df2.reindex(index)
df2['weight'] = df2['sign'].fillna(0).cumsum()
df2['duration'] = 0
df2.iloc[:-1, df2.columns.get_loc('duration')] = (df2.index[1:] - df2.index[:-1]).total_seconds()
df2['duration'] = df2['duration'] * df2['weight']
df2 = df2.resample('1s').sum()

print(df2)

which yields

                     sign  weight  duration
2016-05-19 14:19:14   1.0     1.0  0.179998
2016-05-19 14:19:15   0.0     1.0  1.000000
2016-05-19 14:19:16   0.0     1.0  1.000000
2016-05-19 14:19:17   0.0     3.0  1.000000
2016-05-19 14:19:18  -1.0     1.0  0.500000
2016-05-19 14:19:19   0.0     0.0  0.000000
2016-05-19 14:19:20   0.0     0.0  0.000000
2016-05-19 14:19:21   0.0     0.0  0.000000
2016-05-19 14:19:22   0.0     0.0  0.000000
2016-05-19 14:19:23   0.0     0.0  0.000000
2016-05-19 14:19:24   0.0     0.0  0.000000
2016-05-19 14:19:25   0.0     0.0  0.000000
2016-05-19 14:19:26   0.0     0.0  0.000000
2016-05-19 14:19:27   0.0     0.0  0.000000
2016-05-19 14:19:28   0.0     0.0  0.000000
2016-05-19 14:19:29   0.0     0.0  0.000000
2016-05-19 14:19:30   0.0     0.0  0.000000
2016-05-19 14:19:31   0.0     0.0  0.000000
2016-05-19 14:19:32   1.0     1.0  0.880001
2016-05-19 14:19:33   0.0     1.0  1.000000
2016-05-19 14:19:34   0.0     1.0  1.000000
2016-05-19 14:19:35   0.0     1.0  1.000000
2016-05-19 14:19:36   0.0     1.0  1.000000
2016-05-19 14:19:37  -1.0     1.0  0.020002
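
As a quick sanity check (a suggested extra, not part of the original answer), the binned durations should add up to the total length of all the intervals, since each segment of time is weighted by the number of intervals covering it:

# Optional check: total binned duration equals the summed interval lengths.
starts = pd.to_datetime(df['StartDate'])
ends = pd.to_datetime(df['EndDate'])
total_seconds = (ends - starts).dt.total_seconds().sum()
assert np.isclose(df2['duration'].sum(), total_seconds)  # 8.580001 for this sample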

The main idea is to put the StartDates and EndDates in a single column, and to assign a +1 to each StartDate and a -1 to each EndDate:

df2 = pd.melt(df, var_name='type', value_name='date')
df2['date'] = pd.to_datetime(df2['date'])
df2['sign'] = np.where(df2['type']=='StartDate', 1, -1)
#         type                       date  sign
# 0  StartDate 2016-05-19 14:19:14.820002     1
# 1  StartDate 2016-05-19 14:19:32.119999     1
# 2    EndDate 2016-05-19 14:19:17.899999    -1
# 3    EndDate 2016-05-19 14:19:37.020002    -1

Now make date the index, then reindex the DataFrame so that it contains all the timestamps at a 1-second frequency while keeping the exact event timestamps via the union. (The full code above uses groupby('date').sum() instead of set_index, because its sample data contains a duplicate timestamp, 14:19:17.899999, which is both an EndDate and a StartDate.)

min_date = df2['date'].min().to_period('1s').to_timestamp()
max_date = (df2['date'].max() + pd.Timedelta('1s')).to_period('1s').to_timestamp()
index = pd.date_range(min_date, df2['date'].max(), freq='1s').union(df2['date'])
df2 = df2.set_index('date')
df2 = df2.reindex(index)

#                                  type  sign
# 2016-05-19 14:19:14.000000        NaN   NaN
# 2016-05-19 14:19:14.820002  StartDate   1.0
# 2016-05-19 14:19:15.000000        NaN   NaN
# 2016-05-19 14:19:16.000000        NaN   NaN
# 2016-05-19 14:19:17.000000        NaN   NaN
# 2016-05-19 14:19:17.899999    EndDate  -1.0
# 2016-05-19 14:19:18.000000        NaN   NaN
# ...

In the sign column, fill the NaN values with 0 and take the cumulative sum; the resulting weight column counts how many intervals are open at each timestamp:

df2['weight'] = df2['sign'].fillna(0).cumsum()
#                                  type  sign  weight
# 2016-05-19 14:19:14.000000        NaN   NaN     0.0
# 2016-05-19 14:19:14.820002  StartDate   1.0     1.0
# 2016-05-19 14:19:15.000000        NaN   NaN     1.0
# 2016-05-19 14:19:16.000000        NaN   NaN     1.0
# 2016-05-19 14:19:17.000000        NaN   NaN     1.0
# 2016-05-19 14:19:17.899999    EndDate  -1.0     0.0
# 2016-05-19 14:19:18.000000        NaN   NaN     0.0
# ...

Compute the time elapsed between consecutive rows and weight it by the number of open intervals (the last row gets 0, since there is no following timestamp):

df2['duration'] = 0
df2.iloc[:-1, df2.columns.get_loc('duration')] = (df2.index[1:] - df2.index[:-1]).total_seconds()
df2['duration'] = df2['duration'] * df2['weight']

#                                  type  sign  weight  duration
# 2016-05-19 14:19:14.000000        NaN   NaN     0.0  0.000000
# 2016-05-19 14:19:14.820002  StartDate   1.0     1.0  0.179998
# 2016-05-19 14:19:15.000000        NaN   NaN     1.0  1.000000
# 2016-05-19 14:19:16.000000        NaN   NaN     1.0  1.000000
# 2016-05-19 14:19:17.000000        NaN   NaN     1.0  0.899999
# 2016-05-19 14:19:17.899999    EndDate  -1.0     0.0  0.000000
# 2016-05-19 14:19:18.000000        NaN   NaN     0.0  0.000000

Finally, resample the DataFrame to a 1-second frequency:

df2 = df2.resample('1s').sum()
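
If you only need the per-bin time the question asks for, a small variant of this last line (a suggestion, not required by the answer) restricts the resample to the numeric columns, so the leftover type column is not carried along, and keeps just the duration series:

# Alternative to the line above: resample only the numeric columns and keep
# the per-bin duration, which is the series the question asks for.
df2 = df2[['sign', 'weight', 'duration']].resample('1s').sum()
result = df2['duration']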

I learned this trick from DSM, here.
