I have two DataFrames with a MultiIndex, named df_base and df_updates. I want to combine these DataFrames into a single one while preserving the MultiIndex.
>>> import numpy as np
>>> import pandas as pd
>>> df_base = pd.DataFrame(
... {
... "price": {
... ("2019-01-01", "1001"): 100,
... ("2019-01-01", "1002"): 100,
... ("2019-01-01", "1003"): 100,
... ("2019-01-02", "1001"): 100,
... ("2019-01-02", "1002"): 100,
... ("2019-01-02", "1003"): 100,
... ("2019-01-03", "1001"): 100,
... ("2019-01-03", "1002"): 100,
... ("2019-01-03", "1003"): 100,
... }
... },
... )
>>> df_base.index.names = ["date", "id"]
>>> df_base.convert_dtypes()
price
date id
2019-01-01 1001 100
1002 100
1003 100
2019-01-02 1001 100
1002 100
1003 100
2019-01-03 1001 100
1002 100
1003 100
>>>
>>> df_updates = pd.DataFrame(
... {
... "price": {
... ("2019-01-01", "1001"): np.nan,
... ("2019-01-01", "1002"): 99,
... ("2019-01-01", "1003"): 99,
... ("2019-01-01", "1004"): 100,
... }
... }
... )
>>> df_updates.index.names = ["date", "id"]
>>> df_updates.convert_dtypes()
price
date id
2019-01-01 1001 <NA>
1002 99
1003 99
1004 100
I want to combine them with the following rules: non-NaN values in df_updates overwrite the matching rows in df_base, and rows that exist only in df_updates are appended.
I have already tried using .join, but it raises an error:
>>> df_base.join(df_updates)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[48], line 21
...
ValueError: columns overlap but no suffix specified: Index(['price'], dtype='object')
Even if I add suffixes, it only makes the data more complicated (the overlapping columns still have to be coalesced afterwards, which requires yet another step).
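For illustration, here is roughly what that suffix route looks like on a trimmed-down version of the data — a minimal sketch, with the rsuffix name chosen arbitrarily. Note that after the join the two price columns still have to be merged by hand:

```python
import numpy as np
import pandas as pd

df_base = pd.DataFrame(
    {"price": {("2019-01-01", "1001"): 100.0, ("2019-01-01", "1002"): 100.0}}
)
df_base.index.names = ["date", "id"]

df_updates = pd.DataFrame(
    {"price": {("2019-01-01", "1001"): np.nan, ("2019-01-01", "1002"): 99.0}}
)
df_updates.index.names = ["date", "id"]

# Outer join with a suffix, then coalesce the duplicated price columns by hand
joined = df_base.join(df_updates, how="outer", rsuffix="_new")
joined["price"] = joined["price_new"].combine_first(joined["price"])
result = joined.drop(columns="price_new")
```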
I have also tried using .update, but new rows whose index is not in the base DataFrame are not included in the result:
>>> df_base.update(df_updates)
>>> df_base
price
date id
2019-01-01 1001 100.0
1002 99.0
1003 99.0
2019-01-02 1001 100.0
1002 100.0
1003 100.0
2019-01-03 1001 100.0
1002 100.0
1003 100.0
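That limitation can be seen in isolation: DataFrame.update aligns on the caller's index, modifies it in place, and silently drops rows that exist only in the other frame. A minimal sketch with made-up single-level data:

```python
import pandas as pd

base = pd.DataFrame({"price": [100.0, 100.0]}, index=["a", "b"])
other = pd.DataFrame({"price": [99.0, 50.0]}, index=["b", "c"])

# In-place: "b" is overwritten, but row "c" is ignored
# because it is not present in base.index
base.update(other)
```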
Finally, I also tried a "hacky" two-step approach:
>>> df_base.update(df_updates)
>>> df_base = df_updates.combine_first(df_base)
>>> df_base
price
date id
2019-01-01 1001 100.0
1002 99.0
1003 99.0
1004 100.0
2019-01-02 1001 100.0
1002 100.0
1003 100.0
2019-01-03 1001 100.0
1002 100.0
1003 100.0
This is the result I expect, but I am not sure whether it is the best solution for this case. I benchmarked it with %timeit and got:
>>> %timeit df_base.update(df_updates)
345 µs ± 17.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
>>> %timeit df_updates.combine_first(df_base)
1.36 ms ± 10.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
With larger data, the results are:
>>> %timeit df_base.update(df_updates)
2.38 ms ± 180 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit df_updates.combine_first(df_base)
9.65 ms ± 400 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Is this the best solution for my case, or is there a more efficient/optimized approach (ideally a one-liner pandas function)? Thanks!
You don't need to update first and then combine_first; you can try combining only the rows that differ:
df_base = df_updates.combine_first(df_base.drop(df_updates.index, errors='ignore'))
Output:
price
date id
2019-01-01 1001 NaN
1002 99.0
1003 99.0
1004 100.0
2019-01-02 1001 100.0
1002 100.0
1003 100.0
2019-01-03 1001 100.0
1002 100.0
1003 100.0
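Since the drop leaves the two frames with disjoint indexes, combine_first degenerates to a plain union of rows, so a concat followed by a sort should give the same result — a sketch on a trimmed-down version of the data (not benchmarked here):

```python
import numpy as np
import pandas as pd

# Small reproduction of the question's frames, trimmed to a few rows
df_base = pd.DataFrame(
    {
        "price": {
            ("2019-01-01", "1001"): 100.0,
            ("2019-01-01", "1002"): 100.0,
            ("2019-01-02", "1001"): 100.0,
        }
    }
)
df_base.index.names = ["date", "id"]

df_updates = pd.DataFrame(
    {
        "price": {
            ("2019-01-01", "1001"): np.nan,
            ("2019-01-01", "1002"): 99.0,
            ("2019-01-01", "1004"): 100.0,
        }
    }
)
df_updates.index.names = ["date", "id"]

# Drop the overlapping keys from the base, then stack the remainder
# under the updates; sort_index restores lexicographic index order
combined = pd.concat(
    [df_base.drop(df_updates.index, errors="ignore"), df_updates]
).sort_index()
```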