Using statsmodel estimations with scikit-learn cross-validation, is it possible?

Question · 0 votes · 4 answers

I am looking for a way to use the fit object (result) obtained from Python statsmodels to feed into scikit-learn's cross_val_score cross-validation method.

The attached link suggests that this may be possible, but I have not succeeded.

I get the following error:

estimator should a be an estimator implementing 'fit' method
statsmodels.discrete.discrete_model.BinaryResultsWrapper object at 
0x7fa6e801c590 was passed

Refer to this link.

python scikit-learn cross-validation statsmodels
4 Answers
50 votes

Indeed, you cannot use cross_val_score directly on statsmodels objects, because the interfaces are different: in statsmodels

  • training data is passed directly into the constructor
  • a separate object contains the result of the model estimation
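
To make that difference concrete, here is a small illustrative comparison (not part of the original answer; the data is synthetic):

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 2)
y = X @ np.array([1.0, 2.0]) + np.random.rand(100)

# statsmodels: the data goes into the constructor, and fit() returns a separate results object
results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.params)

# sklearn: the estimator is constructed first, then fit(X, y) is called on it
est = LinearRegression().fit(X, y)
print(est.intercept_, est.coef_)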

However, you can write a simple wrapper to make a statsmodels object look like an sklearn estimator:

import statsmodels.api as sm
from sklearn.base import BaseEstimator, RegressorMixin

class SMWrapper(BaseEstimator, RegressorMixin):
    """ A universal sklearn-style wrapper for statsmodels regressors """
    def __init__(self, model_class, fit_intercept=True):
        self.model_class = model_class
        self.fit_intercept = fit_intercept
    def fit(self, X, y):
        if self.fit_intercept:
            X = sm.add_constant(X)
        self.model_ = self.model_class(y, X)
        self.results_ = self.model_.fit()
        return self
    def predict(self, X):
        if self.fit_intercept:
            X = sm.add_constant(X)
        return self.results_.predict(X)

This class implements the proper fit and predict methods and can be used with sklearn, for example for cross-validation or for inclusion into a pipeline (a pipeline sketch is shown after the output below). Like here:

from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

X, y = make_regression(random_state=1, n_samples=300, noise=100)

print(cross_val_score(SMWrapper(sm.OLS), X, y, scoring='r2'))
print(cross_val_score(LinearRegression(), X, y, scoring='r2'))

You can see that the outputs of the two models are identical, because both are OLS models, cross-validated in the same way.

[0.28592315 0.37367557 0.47972639]
[0.28592315 0.37367557 0.47972639]
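
For completeness, here is a minimal sketch of the pipeline usage mentioned above (the StandardScaler step is just an illustrative choice, not part of the original answer):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# The wrapper behaves like any other sklearn estimator, so it can be the final step of a Pipeline
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('ols', SMWrapper(sm.OLS)),
])
print(cross_val_score(pipe, X, y, scoring='r2'))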

13 votes

Following the suggestion of David (which gave me an error, complaining about the missing function get_parameters) and the scikit-learn documentation, I created the following wrapper for a linear regression. It has the same interface as sklearn.linear_model.LinearRegression, but in addition also has a summary() function, which gives information about p-values, R2 and other statistics, as in statsmodels.OLS.

import statsmodels.api as sm
from sklearn.base import BaseEstimator, RegressorMixin
import pandas as pd
import numpy as np
from sklearn.utils.multiclass import check_classification_targets
from sklearn.utils.validation import check_X_y, check_is_fitted, check_array
from sklearn.utils.multiclass import unique_labels
from sklearn.utils.estimator_checks import check_estimator


class MyLinearRegression(BaseEstimator, RegressorMixin):
    def __init__(self, fit_intercept=True):
        self.fit_intercept = fit_intercept

    """
    Parameters
    ------------
    column_names: list
        It is an optional value, such that this class knows
        what is the name of the feature to associate to each
        column of X. This is useful if you use the method summary(),
        so that it can show the feature name for each coefficient
    """
    def fit(self, X, y, column_names=()):

        if self.fit_intercept:
            X = sm.add_constant(X)

        # Check that X and y have correct shape
        X, y = check_X_y(X, y)

        self.X_ = X
        self.y_ = y

        if len(column_names) != 0:
            cols = column_names.copy()
            cols = list(cols)
            X = pd.DataFrame(X)
            cols = column_names.copy()
            cols.insert(0, 'intercept')
            print('X ', X)
            X.columns = cols

        self.model_ = sm.OLS(y, X)
        self.results_ = self.model_.fit()
        return self

    def predict(self, X):
        # Check is fit had been called
        check_is_fitted(self, 'model_')

        # Input validation
        X = check_array(X)

        if self.fit_intercept:
            X = sm.add_constant(X)

        return self.results_.predict(X)

    def get_params(self, deep=False):
        return {'fit_intercept': self.fit_intercept}

    def summary(self):
        print(self.results_.summary())

Example of use:

cols = ['feature1', 'feature2']

X_train = df_train[cols].values
X_test = df_test[cols].values

y_train = df_train['label']
y_test = df_test['label']

model = MyLinearRegression()
model.fit(X_train, y_train)
model.summary()
model.predict(X_test)

If you want to show the column names, you can call

model.fit(X_train, y_train, column_names=cols)

To use it in cross-validation:

from sklearn.model_selection import cross_val_score

scores = cross_val_score(MyLinearRegression(), X_train, y_train, cv=10, scoring='neg_mean_squared_error')
scores
    

6 votes
FYI, if you use the statsmodels formula API and/or the fit_regularized method, you can modify @David Dale's wrapper class in this way.

import pandas as pd
from sklearn.base import BaseEstimator, RegressorMixin
from statsmodels.formula.api import glm as glm_sm


# This is an example wrapper for statsmodels GLM
class SMWrapper(BaseEstimator, RegressorMixin):
    def __init__(self, family, formula, alpha, L1_wt):
        self.family = family
        self.formula = formula
        self.alpha = alpha
        self.L1_wt = L1_wt
        self.model = None
        self.result = None

    def fit(self, X, y):
        data = pd.concat([pd.DataFrame(X), pd.Series(y)], axis=1)
        data.columns = X.columns.tolist() + ['y']
        self.model = glm_sm(self.formula, data, family=self.family)
        self.result = self.model.fit_regularized(alpha=self.alpha, L1_wt=self.L1_wt, refit=True)
        return self.result

    def predict(self, X):
        return self.result.predict(X)
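
A hypothetical usage sketch (the column names, family, and penalty values below are made up for illustration; X must be a DataFrame whose columns appear in the formula):

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import cross_val_score

# Synthetic data; the formula refers to the DataFrame column names plus the target alias 'y'
X = pd.DataFrame({'x1': np.random.rand(100), 'x2': np.random.rand(100)})
y = 2 * X['x1'] + X['x2'] + np.random.rand(100)

wrapper = SMWrapper(family=sm.families.Gaussian(), formula='y ~ x1 + x2', alpha=0.1, L1_wt=0.5)
print(cross_val_score(wrapper, X, y, scoring='r2'))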
    

-1 votes
Though I don't think it is technically scikit-learn, there is the package pmdarima (link to the pmdarima package on PyPI) that wraps statsmodels and provides a scikit-learn-like interface.
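
A minimal sketch of what that looks like (assuming pmdarima is installed; the load_wineind example dataset and the auto_arima arguments follow pmdarima's documented interface as I recall it, so treat the exact calls as an assumption):

import pmdarima as pm

# auto_arima picks the ARIMA order automatically and returns an estimator with sklearn-style fit/predict
y = pm.datasets.load_wineind()
model = pm.auto_arima(y, seasonal=True, m=12, suppress_warnings=True)
print(model.predict(n_periods=12))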
