How to create a new column that counts the columns in a specific date range satisfying multiple conditions

Votes: 0 · Answers: 2

I have a DataFrame that looks similar to this (except that the Visit and Deliv columns go up to Visit_12 and Deliv_12, and there are several hundred clients - I have simplified it here):

Client   Visit_1    Visit_2    Visit_3    Deliv_1  Deliv_2  Deliv_3 Key_DT
Client_1 2018-01-01 2018-01-20 2018-03-29 No       Yes      Yes     2018-01-15
Client_2 2018-01-10 2018-01-30 2018-02-10 Yes      Yes      No      2018-01-25
Client_3 2018-01-20 2018-04-01 2018-04-10 Yes      Yes      Yes     2018-04-15
Client_4 2018-01-30 2018-03-01 2018-03-10 Yes      No       Yes     2018-02-25
Client_5 2018-04-02 2018-04-07 2018-04-20 Yes      No       Yes     2018-04-01

I want to create a new column called Vis_sum which, for all clients whose Key_DT is after 2018-01-20 but before 2018-03-25, shows the count of visits from Visit_1 to Visit_3 that are (i) after the Key_DT in the same row, (ii) before 2018-03-25, and (iii) have a Yes in the related Deliv column (e.g. Deliv_1 relates to Visit_1). It should look like this:

Client   Visit_1    Visit_2    Visit_3    Deliv_1  Deliv_2  Deliv_3 Key_DT     Vis_sum
Client_1 2018-01-01 2018-01-20 2018-03-29 No       Yes      Yes     2018-01-15 0
Client_2 2018-01-10 2018-01-30 2018-02-10 Yes      Yes      No      2018-01-25 1
Client_3 2018-01-20 2018-04-01 2018-04-10 Yes      Yes      Yes     2018-04-15 0
Client_4 2018-01-30 2018-03-01 2018-03-10 Yes      No       Yes     2018-02-25 1
Client_5 2018-04-02 2018-04-07 2018-04-20 Yes      No       Yes     2018-04-01 0

Please note: there is missing data in all of the columns, so this has to be taken into account.
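For reference, a minimal sketch to rebuild the simplified sample above (this assumes the Visit_* and Key_DT columns should be parsed as datetimes, with any missing entries becoming NaT):

import pandas as pd
from io import StringIO

# Rebuild the simplified sample; errors='coerce' turns missing dates into NaT
data = StringIO("""Client,Visit_1,Visit_2,Visit_3,Deliv_1,Deliv_2,Deliv_3,Key_DT
Client_1,2018-01-01,2018-01-20,2018-03-29,No,Yes,Yes,2018-01-15
Client_2,2018-01-10,2018-01-30,2018-02-10,Yes,Yes,No,2018-01-25
Client_3,2018-01-20,2018-04-01,2018-04-10,Yes,Yes,Yes,2018-04-15
Client_4,2018-01-30,2018-03-01,2018-03-10,Yes,No,Yes,2018-02-25
Client_5,2018-04-02,2018-04-07,2018-04-20,Yes,No,Yes,2018-04-01""")
df = pd.read_csv(data)
date_cols = [c for c in df.columns if c.startswith('Visit_')] + ['Key_DT']
df[date_cols] = df[date_cols].apply(pd.to_datetime, errors='coerce')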

I tried the following, but it does not work. The code for parts (i) and (ii) works when tried together, and the code for (iii) works on its own, but when everything is combined as below it returns 0 in the Vis_sum column for every row.

df.loc[(df.Key_DT < '2018-03-25') & (df.Key_DT >= '2018-01-20'), 'Vis_sum'] = (
    df.filter(like='Visit_').gt(df.Key_DT, axis=0)
    & df.filter(like='Visit_').lt(pd.to_datetime('2018-03-25')).fillna(0).astype(bool)
    & df.filter(like='Deliv_').eq('Yes')
).sum(1)

python pandas datetime condition
2 Answers
1 vote

I had a similar (very messy survey) dataset and used melt, merge and groupby-transform-cumcount to get the numbers I wanted.

Assuming your dataset is called df:

#First melt the DF and the unique visits (you'll have to do this for all your value_vars)    
df1 = pd.melt(df,id_vars='Client',value_vars=['Visit_1','Visit_2','Visit_3'],var_name='Visit',value_name='Visit Date')
print(df1.head(5))
Client  Visit   Visit Date
0   Client_1    Visit_1 2018-01-01
1   Client_2    Visit_1 2018-01-10
2   Client_3    Visit_1 2018-01-20
3   Client_4    Visit_1 2018-01-30
4   Client_5    Visit_1 2018-04-02
#lets do the same for the deliveries 
df2 = pd.melt(df,id_vars='Client',value_vars=['Deliv_1','Deliv_2','Deliv_3'],var_name='Delivery',value_name='Check')

Once the data is melted, we can merge these values back together into one table.

# Lets merge these and then put the Key_DT back on 
res = pd.merge(df1,df2,on='Client')
res = pd.merge(res,df[['Client','Key_DT']],on='Client')
print(res.head(5))
        Client  Visit   Visit Date  Delivery    Check   Key_DT
0   Client_1    Visit_1 2018-01-01  Deliv_1 No  2018-01-15
1   Client_1    Visit_1 2018-01-01  Deliv_2 Yes 2018-01-15
2   Client_1    Visit_1 2018-01-01  Deliv_3 Yes 2018-01-15
3   Client_1    Visit_2 2018-01-20  Deliv_1 No  2018-01-15
4   Client_1    Visit_2 2018-01-20  Deliv_2 Yes 2018-01-15

We can now filter on your conditions and count the values per Client:

# Filter to rows in the Key_DT window with a confirmed delivery, then cumulatively count the matches per Client/Visit
s = res.loc[(res['Key_DT'] >= '2018-01-20') & (res['Key_DT'] <= '2018-03-25') & (res.Check == 'Yes')]
res['visit_sum'] = s.groupby(['Client','Visit'])['Check'].transform('cumcount')
res['visit_sum'] = res['visit_sum'].fillna(0)
print(res.loc[res['visit_sum'] > 0])
    Client  Visit   Visit Date  Delivery    Check   Key_DT  visit_sum
27  Client_4    Visit_1 2018-01-30  Deliv_1 Yes 2018-02-25  1.0
29  Client_4    Visit_1 2018-01-30  Deliv_3 Yes 2018-02-25  1.0
30  Client_4    Visit_2 2018-03-01  Deliv_1 Yes 2018-02-25  1.0
32  Client_4    Visit_2 2018-03-01  Deliv_3 Yes 2018-02-25  1.0
33  Client_4    Visit_3 2018-03-10  Deliv_1 Yes 2018-02-25  1.0
35  Client_4    Visit_3 2018-03-10  Deliv_3 Yes 2018-02-25  1.0
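This stops at the long (melted) format. One possible continuation - a sketch only, not part of the answer above, and assuming the date columns have been parsed with pd.to_datetime - keeps just the Visit_#/Deliv_# pairs with matching suffixes, applies the question's date conditions, and collapses back to one Vis_sum per client:

# Hypothetical continuation: match Visit_# with Deliv_# by suffix, apply the
# date conditions from the question, then count the matches per client
same_pair = res['Visit'].str.split('_').str[-1] == res['Delivery'].str.split('_').str[-1]
cond = (
    same_pair
    & res['Check'].eq('Yes')
    & res['Key_DT'].ge('2018-01-20') & res['Key_DT'].lt('2018-03-25')
    & res['Visit Date'].gt(res['Key_DT']) & res['Visit Date'].lt('2018-03-25')
)
vis_sum = cond.groupby(res['Client']).sum().rename('Vis_sum')
out = df.merge(vis_sum.reset_index(), on='Client', how='left')
print(out[['Client', 'Vis_sum']])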

Hopefully this helps and moves you in the direction of your expected result.


1 vote

The code you wrote doesn't work because it doesn't know that it should match Visit_# with Deliv_#. Try this:

df.loc[(df.Key_DT < '2018-03-25') & (df.Key_DT >= '2018-01-20'), 'Vis_sum'] = (
    df.filter(like='Visit_').gt(df.Key_DT, axis=0)
    & df.filter(like='Visit_').lt(pd.to_datetime('2018-03-25'), axis=0).fillna(0).astype(bool)
    & df.filter(like='Deliv_').rename(columns=lambda x: x.replace('Deliv','Visit')).eq('Yes')
).sum(1)
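The rename is what makes the element-wise & work: boolean DataFrames are combined by aligning column labels, so the Deliv_# frame has to carry the same Visit_# labels as the visit-date masks. A quick check on the simplified sample (illustration only):

deliv_as_visit = df.filter(like='Deliv_').rename(columns=lambda x: x.replace('Deliv','Visit'))
print(list(deliv_as_visit.columns))   # ['Visit_1', 'Visit_2', 'Visit_3'] - same labels as df.filter(like='Visit_')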