From Matlab/fmincon to SciPy/minimize


I am trying to translate a minimization procedure from Matlab to Python/SciPy.

Below is the main Matlab script:

d = 0.5;
n = 7;
l = n * (n - 1) * d;

ko = ones(n,1) .* d.* n;
ki = ones(n,1) .* d.* n;
so = [7 5 3 1 3 0 1];
si = [4 5 5 0 0 2 4];

x0 = [ko / sqrt(l); ki / sqrt(l); 0.9 * ones(n,1); 0.9 * ones(n,1)];
lb = zeros(4*n,1);
ub = [ones(2*n,1)*Inf;ones(2*n,1)+0.1];
options = optimset('Display','off','Algorithm','interior-point','GradObj','off','DerivativeCheck','off','MaxFunEvals',10^5,'MaxIter',10^3,'TolX',10^(-32),'TolFun',10^(-32));

res = fmincon(@(x)obj_fun(x,ko,ki,so,si,n),x0,[],[],[],[],lb,ub,[],options);

The objective function is defined as follows:

function G = obj_fun(m,ko,ki,so,si,n)

    xo = m(1:n);
    xi = m(n+1:2*n);
    yo = m(2*n+1:3*n);
    yi = m(3*n+1:4*n);

    g = 0;

    for i = 1:n
        g = g + (ko(i) * log(xo(i))) + (ki(i) * log(xi(i))) + (so(i) * log(yo(i))) + (si(i) * log(yi(i)));
    end

    f = 0;

    for i = 1:n
        for j = 1:n
            if i ~= j
                f = f + log(1 - yo(i) * yi(j)) - log(1 - (yo(i) * yi(j)) + (xo(i) * xi(j) * yo(i) * yi(j)));
            end
        end
    end

    g = -(g + f);

end

Everything runs smoothly in Matlab: the minimization completes in a few seconds and yields the following result:

res = [
   5018.1,  6131.5,   10276,    21353,   8579.3,    35118,    26854,
   7994.8,  7385.6,  8907.3,    35322,    35236,    13084,   7167.6,
   1.0882, 0.94876, 0.00619,   0.4943, 3.67e-06, 3.11e-11, 2.20e-09,
   0.5144, 0.36119, 0.27945, 1.06e-12, 5.97e-12, 1.59e-08, 9.07e-05
];

Here is my attempt to replicate this setup in Python using the SciPy optimization tools, in particular the minimize function:

import numpy as np
import scipy.optimize as spo

def obj_fun(x, ko, ki, so, si, n):

    xo, xi, yo, yi = np.split(x, 4)

    g = np.zeros(1)

    for i in range(n):
        g += (ko[i] * np.log(xo[i])) + (ki[i] * np.log(xi[i])) + (so[i] * np.log(yo[i])) + (si[i] * np.log(yi[i]))

    f = np.zeros(1)

    for i in range(n):
        for j in range(n):
            if i != j:
                f += np.log(1.0 - (yo[i] * yi[j])) - np.log(1.0 - (yo[i] * yi[j]) + (xo[i] * xi[j] * yo[i] * yi[j]))

    sol = -(g + f)
    jac = np.zeros(n * 4)  # gradient is unknown, so a vector of zeros is returned (see below)

    return sol, jac

d = 0.5
n = 7
l = n * (n - 1) * d

ki = np.ones(n) * d * n
ko = np.ones(n) * d * n
so = np.array([7, 5, 3, 1, 3, 0, 1])  # 1-D arrays, same values and names as the Matlab script
si = np.array([4, 5, 5, 0, 0, 2, 4])

x0 = np.concatenate((ko / np.sqrt(l), ki / np.sqrt(l), 0.9 * np.ones(n), 0.9 * np.ones(n)), axis=None)
bounds = [(0.0, None) if not np.isfinite(b) else (0.0, b) for b in np.repeat(np.array([np.inf, 1.1]), n * 2)]

res = spo.minimize(lambda x: obj_fun(x, ko, ki, so, si, n), x0, method='trust-constr', jac=True, bounds=bounds)
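As an aside, the bounds list built by the comprehension above can also be written more directly. This is a sketch assuming the same pattern as the Matlab script (lower bound 0 everywhere, no upper bound on the first 2n variables, upper bound 1.1 on the last 2n):

```python
import numpy as np

n = 7
# (0, inf) for the x's, (0, 1.1) for the y's; None means unbounded above
bounds = [(0.0, None)] * (2 * n) + [(0.0, 1.1)] * (2 * n)
```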

As you may have noticed, since I do not know the gradient, I return a Jacobian of all zeros from the objective function. If I do not return one, the process never terminates and the following messages appear in the console:

RuntimeWarning: invalid value encountered in less if reduction_ratio < SUFFICIENT_REDUCTION_RATIO and \

RuntimeWarning: invalid value encountered in greater_equal if reduction_ratio >= LARGE_REDUCTION_RATIO:

RuntimeWarning: invalid value encountered in greater_equal elif reduction_ratio >= INTERMEDIARY_REDUCTION_RATIO:

RuntimeWarning: invalid value encountered in less elif reduction_ratio < SUFFICIENT_REDUCTION_RATIO:

RuntimeWarning: invalid value encountered in less_equal return (lb <= x).all() and (x <= ub).all()
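Note that returning a Jacobian of zeros tells trust-constr that the gradient vanishes everywhere, which is likely why it stalls near the starting point. An alternative, sketched here on a toy quadratic rather than the obj_fun above, is to let SciPy estimate the gradient by finite differences with jac='2-point' and have the objective return only the scalar:

```python
import numpy as np
from scipy import optimize as spo

# Toy objective with known minimum at x = 1, standing in for obj_fun
def f(x):
    return np.sum((x - 1.0) ** 2)

# jac='2-point' asks trust-constr to build the gradient by finite differences
res = spo.minimize(f, np.zeros(3), method='trust-constr',
                   jac='2-point', bounds=[(0.0, 2.0)] * 3)
```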

This is the solution the code above suggests, which is far from the one found by Matlab:

x: array([0.95283666, 0.95283666, 0.95283666, 0.95283666, 0.95283666,
   0.95283666, 0.95283666, 0.95283666, 0.95283666, 0.95283666,
   0.95283666, 0.95283666, 0.95283666, 0.95283666, 0.7914069 ,
   0.7914069 , 0.7914069 , 0.7914069 , 0.7914069 , 0.7914069 ,
   0.7914069 , 0.7914069 , 0.7914069 , 0.7914069 , 0.7914069 ,
   0.7914069 , 0.7914069 , 0.7914069 ])

I tried the other available bounded minimization algorithms, L-BFGS-B and SLSQP. They fail to converge to a solution and emit many warnings, probably related to NaN or Inf values being passed to the log function and to divisions:

RuntimeWarning: divide by zero encountered in log
RuntimeWarning: invalid value encountered in double_scalars
RuntimeWarning: invalid value encountered in log
...
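These warnings suggest the iterates leave the domain of the logarithms (e.g. a variable reaching 0, or yo[i]*yi[j] exceeding 1 so that 1 - yo[i]*yi[j] turns negative). A common workaround, offered here as an assumption rather than a verified fix for this particular problem, is to clip the log arguments away from zero:

```python
import numpy as np

eps = 1e-10
x = np.array([0.0, 0.5, 1.0])
# np.log(0) would give -inf with a RuntimeWarning; clipping keeps it finite
safe = np.log(np.clip(x, eps, None))
```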