I'm using Python to optimize some parameters against neural data.
I'm using scipy.optimize.minimize with the BFGS method.
The problem is that the loss is huge and barely decreases. It looks like this:

```
21395.738099992504
21395.738127943157
21395.738115877684
21395.738162622994
21395.738115161068
21395.738162367332
21395.738110567938
21395.73813130535
21395.738097769616
21395.738104114924
21395.73813444559
```

Here is my code:
```python
import numpy as np
from scipy.optimize import minimize


def fit_lnp(X, y, opts):
    """
    Fits an LNP model to the data.

    Args:
        X (numpy.ndarray): Input data matrix.
        y (numpy.ndarray): Output data vector.
        opts (dict): Options for the optimizer.

    Returns:
        tuple: Tuple containing the optimized parameters and predicted values.
    """
    init = 1e-3 * np.random.randn(X.shape[0], 1).flatten()
    result = minimize(
        lambda param: do_lnp(param, X.T, y.T), init, method="BFGS", options=opts
    )
    param = result.x
    yHat = np.exp(np.dot(X.T, param)).T
    return param, yHat


def do_lnp(param, X, y):
    """
    Computes the objective function, its gradient, and the Hessian for the LNP model.

    Args:
        param (numpy.ndarray): Parameter array.
        X (numpy.ndarray): Input data matrix.
        y (numpy.ndarray): Output data vector.

    Returns:
        tuple: Tuple containing the objective function value, its gradient, and the Hessian matrix.
    """
    lamda = 1e-2
    # compute the firing rate
    u = np.dot(X, param.reshape(-1, 1))
    rate = np.exp(u)
    # start computing the Hessian; rate is (n, 1), so it broadcasts over the
    # columns of X directly (equivalent to MATLAB's bsxfun(@times, rate, X))
    rX = rate * X
    hessian_glm = np.dot(rX.T, X)
    # regularization term
    reg_val = (lamda / 2) * np.sum(np.exp(param) ** 2)
    # compute f, the gradient, and the Hessian
    f = np.sum(rate - y * u) + reg_val
    print(f)
    df = np.dot(X.T, (rate - y))  # note: omits the gradient of the regularization term
    hessian = hessian_glm
    return f
```
Should I switch to a different minimization method? How can I improve this optimization?
Any help is appreciated. Thank you.
As it stands, the function `do_lnp` computes the objective value, its gradient, and the Hessian, but returns only the objective. The line `return f` should be replaced with `return f, df`, and `jac=True` should be specified in the `minimize` call so that SciPy unpacks the pair into the objective value and its gradient. The Hessian cannot be bundled into the same return value: `minimize` accepts it only through the separate `hess` argument, and BFGS ignores it anyway (only Newton-type methods such as Newton-CG make use of an analytic Hessian).
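A minimal sketch of that change inside `fit_lnp` (assuming `do_lnp` is edited to end with `return f, df.flatten()`, and keeping in mind that `df` as currently written omits the gradient of the regularization term):

```python
# The objective now returns (value, gradient); jac=True tells
# minimize to unpack that pair instead of expecting a scalar.
result = minimize(
    lambda param: do_lnp(param, X.T, y.T),
    init,
    method="BFGS",
    jac=True,   # do_lnp returns (f, df.flatten())
    options=opts,
)
```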
Alternatively, the gradient and the Hessian can be passed to `scipy.optimize.minimize` as separate functions via the `jac` and `hess` arguments, paired with a method that actually uses second-order information, such as Newton-CG.
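For instance (a sketch, assuming `do_lnp` is changed to `return f, df, hessian`; the wrapper names `lnp_f`, `lnp_grad`, and `lnp_hess` are just illustrative):

```python
def lnp_f(param, X, y):
    f, _, _ = do_lnp(param, X, y)
    return f

def lnp_grad(param, X, y):
    _, df, _ = do_lnp(param, X, y)
    return df.flatten()  # SciPy expects a 1-D gradient

def lnp_hess(param, X, y):
    _, _, hessian = do_lnp(param, X, y)
    return hessian

# Newton-CG, unlike BFGS, actually exploits the analytic Hessian.
result = minimize(lnp_f, init, args=(X.T, y.T),
                  method="Newton-CG", jac=lnp_grad, hess=lnp_hess)
```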
Some general tips: