I am trying to use the describe() function in dask to get summary statistics for my data, but I run into the error shown below.
import dask.dataframe as dd
df = dd.read_csv('Measurement_table.csv',assume_missing=True)
df.describe().compute()  # this works, but when I try to groupby I get an error
What I am actually trying to do is make the following Python pandas code run faster with the help of dask:
(df.groupby(['person_id','measurement_concept_id','visit_occurrence_id'])['value_as_number']
   .describe()
   .unstack()
   .swaplevel(0,1,axis=1)
   .reindex(df['readings'].unique(), axis=1, level=0))
I tried adding compute() to each stage of the chain, like this:
df1 = (df.groupby(['person_id','measurement_concept_id','visit_occurrence_id'])['value_as_number']
         .describe().compute()
         .unstack().compute()
         .swaplevel(0,1,axis=1).compute()
         .reindex(df['readings'].unique(), axis=1, level=0).compute())
but I get the following error, even though the same code works fine in pandas. Can anyone help me solve this problem?
unstack is not implemented in dask, but describe can be used with apply:
df = (sd.groupby(['subject_id','readings'])['val']
.apply(lambda x: x.describe())
.reset_index()
.rename(columns={'level_2':'func'})
.compute()
)
print (df)
subject_id readings func val
0 1 READ_1 count 2.000000
1 1 READ_1 mean 6.000000
2 1 READ_1 std 1.414214
3 1 READ_1 min 5.000000
4 1 READ_1 25% 5.500000
.. ... ... ... ...
51 4 READ_09 min 45.000000
52 4 READ_09 25% 45.000000
53 4 READ_09 50% 45.000000
54 4 READ_09 75% 45.000000
55 4 READ_09 max 45.000000
[112 rows x 4 columns]
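To make the approach concrete, here is a minimal, self-contained sketch using a small hypothetical sample frame (the column names `subject_id`, `readings`, and `val` follow the answer above; the data values are made up). It is shown in plain pandas to illustrate the shape the apply-based describe produces; in dask you would build the frame with `dd.from_pandas` and append `.compute()` to the same chain:

```python
import pandas as pd

# Hypothetical sample data mirroring the answer's `sd` frame
pdf = pd.DataFrame({
    'subject_id': [1, 1, 1, 2, 2],
    'readings':   ['READ_1', 'READ_1', 'READ_2', 'READ_1', 'READ_2'],
    'val':        [5.0, 7.0, 3.0, 4.0, 6.0],
})

# describe() per group returns a Series with a third index level
# holding the statistic name; reset_index exposes it as 'level_2',
# which we rename to 'func'
df = (pdf.groupby(['subject_id', 'readings'])['val']
         .apply(lambda x: x.describe())
         .reset_index()
         .rename(columns={'level_2': 'func'}))

print(df.head())
```

Each (subject_id, readings) group contributes eight rows, one per statistic (count, mean, std, min, 25%, 50%, 75%, max), which is the long format seen in the printed output above.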