Fitting the parameters of two coupled equations with scipy least squares

Problem description · 0 votes · 1 answer

I want to fit these 4 parameters

K0 = np.array([K11, K12, K21, K22])

from the coupled equations f(x, parameters) and g(x, parameters):

f = (K11*x + 2*K11*K12*x*x) / (2*(1 + K11*x + K11*K12*x*x))

g = (K21*x + 2*K21*K22*x*x) / (2*(1 + K21*x + K21*K22*x*x))

These equations can be taken as normalized, and they produce hysteresis-like sigmoidal curves, where the upper arm is described by g and the lower arm by f.

  • f is the function fitted to the Y1exp experimental points

  • g is the function fitted to the Y2exp experimental points
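For concreteness, the two arms can be written as plain Python functions; the grouping of the constants (K11, K12 with f, K21, K22 with g) is assumed from the formulas above:

```python
import numpy as np

def f(x, K11, K12):
    # lower-arm curve: normalized sigmoid, saturates to 1 for large x
    return (K11 * x + 2 * K11 * K12 * x**2) / (2 * (1 + K11 * x + K11 * K12 * x**2))

def g(x, K21, K22):
    # upper-arm curve: same functional form with the second pair of constants
    return (K21 * x + 2 * K21 * K22 * x**2) / (2 * (1 + K21 * x + K21 * K22 * x**2))
```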

The idea is to obtain the fitted K0 parameters together with their estimated errors, following the "bootstrap" section pasted here from here:

import numpy as np
from scipy import optimize

errfunc = lambda p, x, y: function(x, p) - y

# Fit first time
pfit, perr = optimize.leastsq(errfunc, p0, args=(datax, datay), full_output=0)


# Get the stdev of the residuals
residuals = errfunc(pfit, datax, datay)
sigma_res = np.std(residuals)

sigma_err_total = np.sqrt(sigma_res**2 + yerr_systematic**2)

# 100 random data sets are generated and fitted
ps = []
for i in range(100):

    randomDelta = np.random.normal(0., sigma_err_total, len(datay))
    randomdataY = datay + randomDelta

    randomfit, randomcov = \
        optimize.leastsq(errfunc, p0, args=(datax, randomdataY),\
                         full_output=0)

    ps.append(randomfit) 

ps = np.array(ps)
mean_pfit = np.mean(ps,0)

# You can choose the confidence interval that you want for your
# parameter estimates: 
Nsigma = 1. # 1sigma gets approximately the same as methods above
            # 1sigma corresponds to 68.3% confidence interval
            # 2sigma corresponds to 95.44% confidence interval
err_pfit = Nsigma * np.std(ps,0) 

pfit_bootstrap = mean_pfit
perr_bootstrap = err_pfit

where pfit_bootstrap are the fitted parameters and perr_bootstrap is the 1-sigma error for each parameter.

Since the two equations are coupled, I tried the approach from here.

I defined the error function errfunc as "foo", so as to minimize the squared differences between the fitted and the experimental points:

foo = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()

But when stepping through line by line, at the following line:

pfit, perr = optimization.leastsq(foo, K0, args=(L, Y1exp, Y2exp))

I get the following error:

Traceback (most recent call last)
Cell In[47], line 24
     20 function = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()  
     22 print(function)
---> 24 pfit = optimize.leastsq(foo, K0, args=(L, Y1exp, Y2exp))

File ~/anaconda3/envs/TP/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py:415, in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    413 if not isinstance(args, tuple):
    414     args = (args,)
--> 415 shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    416 m = shape[0]
    418 if n > m:

File ~/anaconda3/envs/TP/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py:25, in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape)
     23 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     24                 output_shape=None):
---> 25     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
     26     if (output_shape is not None) and (shape(res) != output_shape):
     27         if (output_shape[0] != 1):

TypeError: 'numpy.float64' object is not callable

Any suggestions on how to fix this and use this approach to fit the K parameters together with their associated errors?

python optimization scipy-optimize least-squares function-fitting
1 Answer

1 vote

The first argument to leastsq needs to be a callable that receives the parameter array plus whatever args you pass. In your example, foo is the result of np.sum() and is not callable. Your example is a bit hard to follow, but this might be a starting point.

Instead of this:

foo = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()


try something like this:

def foo(K, Y1exp, Y2exp):
    resid = np.square(Y1exp - f(*K)).sum() + np.square(Y2exp - g(*K)).sum()
    return resid

assuming f and g are defined as functions elsewhere.
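One caveat: leastsq expects its callable to return a vector of residuals (which it squares and sums internally), so a scalar like the foo above will still fail for 4 parameters ("N must not exceed M"). A runnable sketch of the coupled fit that instead returns the concatenated residuals of both curves; the data and K values here are synthetic, for illustration only:

```python
import numpy as np
from scipy import optimize

def f(x, K11, K12):
    # lower-arm curve (form taken from the question)
    return (K11 * x + 2 * K11 * K12 * x**2) / (2 * (1 + K11 * x + K11 * K12 * x**2))

def g(x, K21, K22):
    # upper-arm curve, same form with the second pair of constants
    return (K21 * x + 2 * K21 * K22 * x**2) / (2 * (1 + K21 * x + K21 * K22 * x**2))

def residuals(K, x, Y1exp, Y2exp):
    # leastsq squares and sums this vector internally, so stacking the
    # residuals of both curves fits all four K's simultaneously
    K11, K12, K21, K22 = K
    return np.concatenate([Y1exp - f(x, K11, K12),
                           Y2exp - g(x, K21, K22)])

# synthetic demo data (assumed values, for illustration only)
L = np.linspace(0.01, 10.0, 50)
K_true = np.array([1.0, 0.5, 2.0, 1.5])
Y1exp = f(L, *K_true[:2])
Y2exp = g(L, *K_true[2:])

K0 = np.array([0.5, 0.5, 0.5, 0.5])  # initial guess
pfit, ier = optimize.leastsq(residuals, K0, args=(L, Y1exp, Y2exp))
```

The same residuals callable can then be dropped into the bootstrap loop from the question by perturbing Y1exp and Y2exp to obtain the parameter errors.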
