Pandas: selectively drop consecutive duplicates


I have looked through all the questions/answers I could find on how to selectively remove consecutive duplicates in a pandas dataframe, and I still cannot work out the following case:

import pandas as pd
import numpy as np

def random_dates(start, end, n, freq, seed=None):
    if seed is not None:
        np.random.seed(seed)

    dr = pd.date_range(start, end, freq=freq)
    return pd.to_datetime(np.sort(np.random.choice(dr, n, replace=False)))

date = random_dates('2018-01-01', '2018-01-12', 20, 'H', seed=[3, 1415])

data = {'Timestamp': date, 
        'Message': ['Message received.','Sending...', 'Sending...', 'Sending...', 'Work in progress...', 'Work in progress...', 
                    'Message received.','Sending...', 'Sending...','Work in progress...',
                    'Message received.','Sending...', 'Sending...', 'Sending...','Work in progress...', 'Work in progress...', 'Work in progress...',
                    'Message received.','Sending...', 'Sending...']}

df = pd.DataFrame(data, columns = ['Timestamp', 'Message'])

I have the following dataframe:

             Timestamp              Message
0  2018-01-02 03:00:00    Message received.
1  2018-01-02 11:00:00           Sending...
2  2018-01-03 04:00:00           Sending...
3  2018-01-04 11:00:00           Sending...
4  2018-01-04 16:00:00  Work in progress...
5  2018-01-04 17:00:00  Work in progress...
6  2018-01-05 05:00:00    Message received.
7  2018-01-05 11:00:00           Sending...
8  2018-01-05 17:00:00           Sending...
9  2018-01-06 02:00:00  Work in progress...
10 2018-01-06 14:00:00    Message received.
11 2018-01-07 07:00:00           Sending...
12 2018-01-07 20:00:00           Sending...
13 2018-01-08 01:00:00           Sending...
14 2018-01-08 02:00:00  Work in progress...
15 2018-01-08 15:00:00  Work in progress...
16 2018-01-09 00:00:00  Work in progress...
17 2018-01-10 03:00:00    Message received.
18 2018-01-10 09:00:00           Sending...
19 2018-01-10 14:00:00           Sending...

I only want to drop the consecutive duplicates in the df['Message'] column when 'Message' is 'Work in progress...', keeping the first instance of each run (so here indices 5, 15 and 16 need to be dropped). Ideally, I would like to obtain:

             Timestamp              Message
0  2018-01-02 03:00:00    Message received.
1  2018-01-02 11:00:00           Sending...
2  2018-01-03 04:00:00           Sending...
3  2018-01-04 11:00:00           Sending...
4  2018-01-04 16:00:00  Work in progress...
6  2018-01-05 05:00:00    Message received.
7  2018-01-05 11:00:00           Sending...
8  2018-01-05 17:00:00           Sending...
9  2018-01-06 02:00:00  Work in progress...
10 2018-01-06 14:00:00    Message received.
11 2018-01-07 07:00:00           Sending...
12 2018-01-07 20:00:00           Sending...
13 2018-01-08 01:00:00           Sending...
14 2018-01-08 02:00:00  Work in progress...
17 2018-01-10 03:00:00    Message received.
18 2018-01-10 09:00:00           Sending...
19 2018-01-10 14:00:00           Sending...

I have already tried the solutions offered in similar posts, for example:

df['Message'].loc[df['Message'].shift(-1) != df['Message']]
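
That removes consecutive duplicates of every message and keeps the last row of each run rather than the first, so it is too aggressive here; the keep-first variant (shown only for comparison, it is not from those posts) has the same problem:

# keeps only the first row of every consecutive run, but for all messages, not just 'Work in progress...'
df.loc[df['Message'].shift() != df['Message']]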

I have also computed the length of each message:

df['length'] = df['Message'].apply(lambda x: len(x))

and written a conditional drop as:

df.loc[(df['length'] ==17) | (df['length'] ==10) | ~df['Message'].duplicated(keep='first')]

which looks better, but the run at indices 14, 15 and 16 is still removed in its entirety instead of keeping index 14 (the same happens to index 9), so the behaviour is still not right; see the short illustration after the output:

             Timestamp              Message  length
0  2018-01-02 03:00:00    Message received.      17
1  2018-01-02 11:00:00           Sending...      10
2  2018-01-03 04:00:00           Sending...      10
3  2018-01-04 11:00:00           Sending...      10
4  2018-01-04 16:00:00  Work in progress...      19
6  2018-01-05 05:00:00    Message received.      17
7  2018-01-05 11:00:00           Sending...      10
8  2018-01-05 17:00:00           Sending...      10
10 2018-01-06 14:00:00    Message received.      17
11 2018-01-07 07:00:00           Sending...      10
12 2018-01-07 20:00:00           Sending...      10
13 2018-01-08 01:00:00           Sending...      10
17 2018-01-10 03:00:00    Message received.      17
18 2018-01-10 09:00:00           Sending...      10
19 2018-01-10 14:00:00           Sending...      10
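
For reference, the over-deletion happens because Series.duplicated looks at the whole column rather than at consecutive runs; a small made-up series (not taken from the data above) shows this:

s = pd.Series(['Work in progress...', 'Work in progress...', 'Sending...', 'Work in progress...'])
print(s.duplicated(keep='first'))
# 0    False
# 1     True
# 2    False
# 3     True   <- the first 'Work in progress...' of the second run is flagged as well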

Thank you for your time and help!

python pandas duplicates data-manipulation
1 Answer

First build a mask that compares each value with the previous one via Series.shift, so that only the first value of each consecutive run is marked, and chain it with a second mask that keeps every row whose message is not 'Work in progress...'; then filter with boolean indexing:
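
The answer's code block did not survive in this copy; a minimal sketch of the approach just described (a reconstruction, not necessarily the answerer's exact code) would be:

# keep a row if it differs from the previous row, or if it is not 'Work in progress...'
mask = df['Message'].ne(df['Message'].shift()) | df['Message'].ne('Work in progress...')
df = df[mask]
print(df)

Only rows that are both equal to their predecessor and equal to 'Work in progress...' are dropped, which gives the desired output above (indices 5, 15 and 16 removed).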