I am trying to compute the mean and the 95% confidence interval of the "Force" column in a large dataset. I need the results grouped by the different "Class" values using the groupby function.
When I compute the mean and put it into a new dataframe, it gives me NaN values for all rows. I'm not sure whether I'm on the right track. Is there a simpler way to do this?
Here is a sample dataframe:
df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]},
                  columns=['Class', 'Force'])
The first step toward the confidence interval is computing the mean. This is what I used:
F1_Mean = df.groupby(['Class'])['Force'].mean()
This gives me NaN values for all rows.
Update 25 Oct 2021: @a-donda pointed out that the 95% interval should be based on 1.96 × the standard deviation of the mean (i.e., the standard error).
import pandas as pd
import math

df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]},
                  columns=['Class', 'Force'])
print(df)
print('-'*30)

stats = df.groupby(['Class'])['Force'].agg(['mean', 'count', 'std'])
print(stats)
print('-'*30)

ci95_hi = []
ci95_lo = []
for i in stats.index:
    m, c, s = stats.loc[i]
    ci95_hi.append(m + 1.96*s/math.sqrt(c))
    ci95_lo.append(m - 1.96*s/math.sqrt(c))

stats['ci95_hi'] = ci95_hi
stats['ci95_lo'] = ci95_lo
print(stats)
The output is:
Class Force
0 A1 50
1 A1 150
2 A1 100
3 A2 120
4 A3 140
5 A3 160
------------------------------
mean count std
Class
A1 100 3 50.000000
A2 120 1 NaN
A3 150 2 14.142136
------------------------------
mean count std ci95_hi ci95_lo
Class
A1 100 3 50.000000 156.580326 43.419674
A2 120 1 NaN NaN NaN
A3 150 2 14.142136 169.600000 130.400000
You can simplify @yoonghm's solution by using 'sem' (the standard error of the mean).
import pandas as pd

df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]},
                  columns=['Class', 'Force'])
print(df)
print('-'*30)

stats = df.groupby(['Class'])['Force'].agg(['mean', 'sem'])
print(stats)
print('-'*30)

stats['ci95_hi'] = stats['mean'] + 1.96 * stats['sem']
stats['ci95_lo'] = stats['mean'] - 1.96 * stats['sem']
print(stats)
As mentioned in the comments, I can't reproduce your error, but you can check whether your numbers are stored as numbers rather than strings. Use
df.info()
and make sure the relevant column is float or int:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6 entries, 0 to 5
Data columns (total 2 columns):
Class 6 non-null object # <--- non-number column
Force 6 non-null int64 # <--- number (int) column
dtypes: int64(1), object(1)
memory usage: 176.0+ bytes
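If df.info() does show the "Force" column as object, converting it with pd.to_numeric usually restores the means. A minimal sketch (the string values here are a hypothetical example of what a badly parsed CSV might produce):

```python
import pandas as pd

# Hypothetical case: 'Force' was read in as strings rather than numbers
df = pd.DataFrame({'Class': ['A1', 'A1', 'A2'],
                   'Force': ['50', '150', '120']})
print(df.dtypes)  # Force shows as object

# Convert to numbers; anything unparseable becomes NaN instead of raising
df['Force'] = pd.to_numeric(df['Force'], errors='coerce')
print(df.groupby('Class')['Force'].mean())
```

After the conversion, the grouped mean works as expected.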
Not to be a bother on purpose, but the 1.96 × sd formula is oversimplified and will lead to wrong conclusions in smaller samples. Use the t distribution instead:
import pandas as pd
import scipy.stats as stats

df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]},
                  columns=['Class', 'Force'])
print(df)

grouped = df.groupby(['Class'])['Force'].agg(['mean', 'count', 'std'])

# Calculate the t-value for a 95% confidence interval
t_value = stats.t.ppf(0.975, grouped['count'] - 1)  # 0.975 corresponds to (1 - alpha/2)

# Calculate the margin of error
me = t_value * grouped['std'] / (grouped['count'] ** 0.5)

# Calculate the lower and upper bounds of the confidence interval
grouped['ci_low'] = grouped['mean'] - me
grouped['ci_high'] = grouped['mean'] + me
print(grouped)
Output:
Class Force
0 A1 50
1 A1 150
2 A1 100
3 A2 120
4 A3 140
5 A3 160
mean count std ci_low ci_high
Class
A1 100.0 3 50.000000 -24.206886 224.206886
A2 120.0 1 NaN NaN NaN
A3 150.0 2 14.142136 22.937953 277.062047
(Written with help from ChatGPT 3.5; the results have been confirmed.)
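The same t-based interval can also be obtained more compactly with scipy.stats.t.interval, which takes the confidence level, the degrees of freedom, the centre, and the scale directly. A sketch (the helper name t_ci is my own):

```python
import pandas as pd
import scipy.stats as st

df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]})

def t_ci(x, confidence=0.95):
    # degrees of freedom = n - 1, centre = sample mean,
    # scale = standard error of the mean (Series.sem uses ddof=1)
    return st.t.interval(confidence, len(x) - 1, loc=x.mean(), scale=x.sem())

print(df.groupby('Class')['Force'].apply(t_ci))
```

For A1 this reproduces the (-24.21, 224.21) bounds above; single-observation groups like A2 still come out as (nan, nan).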
I think the pd.Series.quantile method can be used to return interval bounds like this (strictly speaking, these are empirical quantiles of the data rather than a confidence interval of the mean):
confidence_intervals = df.groupby('Class').quantile(q=[0.05, 0.95])
print(confidence_intervals)
Output:
Force
Class
A1 0.05 55.0
0.95 145.0
A2 0.05 120.0
0.95 120.0
A3 0.05 141.0
0.95 159.0
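Note that q=[0.05, 0.95] spans the central 90% of the observations; for a central 95% range use [0.025, 0.975]. A sketch that also selects the "Force" column explicitly and unstacks the result into one row per class:

```python
import pandas as pd

df = pd.DataFrame({'Class': ['A1', 'A1', 'A1', 'A2', 'A3', 'A3'],
                   'Force': [50, 150, 100, 120, 140, 160]})

# Central 95% range of the observed data, as a wide table
ci = df.groupby('Class')['Force'].quantile([0.025, 0.975]).unstack()
print(ci)
```

Each row then holds the lower and upper quantile for one class.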