How do I specify that a kernel hyperparameter is always fixed in a scikit-learn GP kernel?


I am trying to implement the following simple custom GP kernel in scikit-learn (`feature` is a helper defined elsewhere in my code):

import numpy as np
from sklearn.gaussian_process.kernels import Hyperparameter, Kernel


class uniform(Kernel):
    def __init__(self, M):
        # Initialize the parameters of your custom kernel
        self.M = M
        # Call the superclass constructor
        super().__init__()
        
    @property
    def hyperparameter_M(self):
        return Hyperparameter("M", "numeric", "fixed")
    
    def __call__(self, X, Y=None, eval_gradient=False):
        # The gradient of the kernel k(X, X) with respect to the log of the
        # hyperparameter of the kernel. Only returned when `eval_gradient`
        # is True.

        if Y is None:
            Y = X
        X = np.atleast_2d(X)
        Y = np.atleast_2d(Y)
        D = np.shape(X)[1]
        covariance_matrix = 1
        w = np.arange(1,self.M+1,1)
        for d in range(D):
            a = feature(X[:,d],self.M).eval()
            b = feature(Y[:,d],self.M).eval()
            covariance_matrix_d = np.inner(a, b)
            covariance_matrix *= covariance_matrix_d

        if eval_gradient:
            # grad_matrix = np.zeros(covariance_matrix.shape)
            return covariance_matrix, None
        else:
            return covariance_matrix

    def diag(self, X):
        return np.diag(self.__call__(X, Y=X, eval_gradient=False))
    
    def is_stationary(self):
        return True

The problem is that when a GP is fitted with this kernel, the gradient is required even though there are no non-fixed hyperparameters (uniform.theta is an empty array). Returning None as in the code above is not a solution. Am I doing something wrong?
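For reference, a minimal sketch of the failing usage (X_train and y_train stand in for the training data; feature is defined elsewhere):

from sklearn.gaussian_process import GaussianProcessRegressor

kernel = uniform(M=5)
print(kernel.theta)  # array([], dtype=float64) -- no non-fixed hyperparameters

gpr = GaussianProcessRegressor(kernel=kernel)
gpr.fit(X_train, y_train)  # fitting still ends up requesting the kernel gradient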

python scikit-learn gaussian-process
1 Answer

You defined the hyperparameter M as fixed, but you are still expected to provide a gradient for it in the __call__ method. Since M is fixed, the gradient with respect to M will be 0.
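As a quick sanity check (not part of your kernel), a built-in kernel with every hyperparameter fixed behaves the same way: theta is empty and the returned gradient has a zero-length trailing dimension.

import numpy as np
from sklearn.gaussian_process.kernels import RBF

k = RBF(length_scale=1.0, length_scale_bounds="fixed")
X = np.random.rand(5, 2)
K, K_grad = k(X, eval_gradient=True)
print(k.theta)       # array([], dtype=float64)
print(K.shape)       # (5, 5)
print(K_grad.shape)  # (5, 5, 0) -- no free hyperparameters, so no gradient components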

Here is how you can modify __call__ to make it work:

def __call__(self, X, Y=None, eval_gradient=False):
    if Y is None:
        Y = X
    X = np.atleast_2d(X)
    Y = np.atleast_2d(Y)
    D = np.shape(X)[1]
    covariance_matrix = 1
    w = np.arange(1, self.M + 1, 1)
    for d in range(D):
        a = feature(X[:, d], self.M).eval()
        b = feature(Y[:, d], self.M).eval()
        covariance_matrix_d = np.inner(a, b)
        covariance_matrix *= covariance_matrix_d
    if eval_gradient:
        # All hyperparameters are fixed, so the gradient has no components.
        # scikit-learn expects shape (n_samples_X, n_samples_X, n_dims), with n_dims == 0 here.
        grad_matrix = np.zeros((covariance_matrix.shape[0], covariance_matrix.shape[1], 0))
        return covariance_matrix, grad_matrix
    else:
        return covariance_matrix
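Since every hyperparameter of this kernel is fixed, there is nothing to optimize, so another option is to disable hyperparameter optimization altogether, in which case no gradient is ever requested (a sketch; X_train and y_train are placeholders):

from sklearn.gaussian_process import GaussianProcessRegressor

gpr = GaussianProcessRegressor(kernel=uniform(M=5), optimizer=None)
gpr.fit(X_train, y_train)  # theta is never optimized, so eval_gradient is not needed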