I have a fairly wide dataset (700k rows and 100+ columns) with multiple `entity_id`s, each carrying several datetime intervals associated with different `attr` values. I need to split those intervals at a list of specific dates (`specific_dt`) while keeping each row's `attr` values unchanged.
Here is a small reproducible example:
import pandas as pd

have = {'entity_id': [1,1,2,2],
        'start_date': ['2014-12-01 00:00:00', '2015-03-01 00:00:00', '2018-02-12 00:00:00', '2019-02-01 00:00:00'],
        'end_date': ['2015-02-28 23:59:59', '2015-05-31 23:59:59', '2019-01-31 23:59:59', '2023-05-28 23:59:59'],
        'attr1': ['A', 'B', 'D', 'J']}
have = pd.DataFrame(data=have)
have
entity_id start_date end_date attr1
0 1 2014-12-01 00:00:00 2015-02-28 23:59:59 A
1 1 2015-03-01 00:00:00 2015-05-31 23:59:59 B
2 2 2018-02-12 00:00:00 2019-01-31 23:59:59 D
3 2 2019-02-01 00:00:00 2023-05-28 23:59:59 J
# Specific dates to integrate
specific_dt = ['2015-01-01 00:00:00', '2015-03-31 00:00:00']
The expected output is:
want
entity_id start_date end_date attr1
0 1 2014-12-01 2014-12-31 23:59:59 A
0 1 2015-01-01 2015-02-28 23:59:59 A
1 1 2015-03-01 2015-03-30 23:59:59 B
1 1 2015-03-31 2015-05-31 23:59:59 B
2 2 2018-02-12 2019-01-31 23:59:59 D
3 2 2019-02-01 2023-05-28 23:59:59 J
I have been able to produce the desired output with the following code:
# Create a list to store the new rows
new_rows = []

# Iterate through each row in the initial DataFrame
for index, row in have.iterrows():
    id_val = row['entity_id']
    start_date = pd.to_datetime(row['start_date'])
    end_date = pd.to_datetime(row['end_date'], errors='coerce')

    # Iterate through specific dates and create new rows
    for date in specific_dt:
        specific_date = pd.to_datetime(date)
        # Check if the specific date is within the interval
        if start_date < specific_date < end_date:
            # Create a new row with all columns and append it to the list
            new_row = row.copy()
            new_row['start_date'] = start_date
            new_row['end_date'] = specific_date - pd.Timedelta(seconds=1)
            new_rows.append(new_row)
            # Update the start_date for the next iteration
            start_date = specific_date

    # Add the last part of the original interval as a new row
    new_row = row.copy()
    new_row['start_date'] = start_date
    new_row['end_date'] = end_date
    new_rows.append(new_row)

# Create a new DataFrame from the list of new rows
want = pd.DataFrame(data=new_rows)
However, on my real dataset this is very slow (10+ minutes). Can it be optimized, perhaps by eliminating the for loop?
For reference, I can do this in a few seconds with a simple SAS data step (the example below handles one of the two specific dates to integrate):
data want;
    set have;
    by entity_id start_date end_date;
    if start_date < "31MAR2015"d < end_date then
        do;
            retain _start _end;
            _start = start_date;
            _end = end_date;
            end_date = "30MAR2015"d;
            output;
            start_date = "31MAR2015"d;
            end_date = _end;
            output;
        end;
    else output;
    drop _start _end;
run;
Instead of iterating over the rows, you can split every matching interval at once with a boolean mask per split date:

have["start_date"] = pd.to_datetime(have["start_date"])
have["end_date"] = pd.to_datetime(have["end_date"])

specific_dt = [
    pd.to_datetime("2015-01-01 00:00:00"),
    pd.to_datetime("2015-03-31 00:00:00"),
]

l = [have]
for dt in specific_dt:
    mask = (have["start_date"] < dt) & (have["end_date"] > dt)
    new_df = have.loc[mask].copy()  # .copy() avoids SettingWithCopyWarning
    # Subtract one second (not one minute) to match the 23:59:59 boundaries
    have.loc[mask, "end_date"] = dt - pd.Timedelta(seconds=1)
    new_df["start_date"] = dt
    l.append(new_df)

want = pd.concat(l).sort_values(["entity_id", "start_date"])

Note one limitation: pieces appended to `l` are not re-checked against later split dates, so this assumes at most one of the `specific_dt` values falls inside any given original interval (true for the example above).
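If several split dates can fall inside the same original interval, an explode-based variant handles that in one pass. This is a sketch, not the method from the question; `cuts` is a helper column introduced here, and the data setup repeats the reproducible example:

```python
import numpy as np
import pandas as pd

have = pd.DataFrame({
    "entity_id": [1, 1, 2, 2],
    "start_date": pd.to_datetime(["2014-12-01", "2015-03-01",
                                  "2018-02-12", "2019-02-01"]),
    "end_date": pd.to_datetime(["2015-02-28 23:59:59", "2015-05-31 23:59:59",
                                "2019-01-31 23:59:59", "2023-05-28 23:59:59"]),
    "attr1": ["A", "B", "D", "J"],
})
specific_dt = pd.to_datetime(["2015-01-01", "2015-03-31"]).sort_values()

out = have.copy()
# One list of boundaries per row: its own start plus every split date
# that falls strictly inside its interval
out["cuts"] = [
    [s] + [d for d in specific_dt if s < d < e]
    for s, e in zip(out["start_date"], out["end_date"])
]
out = out.explode("cuts")            # one row per sub-interval
out["cuts"] = pd.to_datetime(out["cuts"])
out["start_date"] = out["cuts"]

# Each piece ends one second before the next piece of the same original
# row; the last piece keeps the original end_date (shift yields NaT there)
nxt = out.groupby(level=0)["cuts"].shift(-1)
has_next = nxt.notna().to_numpy()
end = out["end_date"].to_numpy().copy()
end[has_next] = nxt.to_numpy()[has_next] - np.timedelta64(1, "s")
out["end_date"] = end

want = out.drop(columns="cuts").reset_index(drop=True)
```

The per-row list comprehension is still a Python loop, but it runs once over the rows with no repeated `to_datetime` parsing, and all boundary arithmetic is done on whole columns.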