Lack of convergence in scipy.optimize.minimize

Question · 2 votes · 1 answer

In short, I am trying to "scale" an array so that its integral is 1, i.e. so that the sum of the array elements divided by the number of elements equals 1. However, this scaling has to be achieved by varying a parameter alpha, not by simply multiplying the array by a scale factor. To do this I am using scipy.optimize.minimize. The problem is that the code runs and reports "Optimization terminated successfully", but the displayed current function value is not 0, so the optimization clearly did not actually succeed.

This is a screenshot from the paper that defines the equation.

import numpy as np
from scipy.optimize import minimize

# just defining some parameters
N = 100
g = np.ones(N)
eta = np.array([i/100 for i in range(N)])
g_at_one = 0.01   

def my_minimization_func(alpha):
    # g(eta; alpha) from the paper, evaluated on the whole eta grid
    g[:] = alpha*(1 + (1 - g_at_one/alpha)
                  * np.exp((eta - eta[N-1])/2)
                  * (1/np.sqrt(3)*np.sin(np.sqrt(3)/2*(eta - eta[N-1]))
                     - np.cos(np.sqrt(3)/2*(eta - eta[N-1]))))
    # deviation of the mean of g from 1; this should be driven to 0
    to_be_minimized = np.sum(g)/N - 1
    return to_be_minimized

result_of_minimization = minimize(my_minimization_func, 0.1, options={'gtol': 1e-8, 'disp': True})
alpha_at_min = result_of_minimization.x
print(alpha_at_min)
Tags: python, optimization, scipy
1 Answer

0 votes

Well, it is not clear to me why you use minimization for a problem like this. You could simply normalize the matrix and then compute alpha from the normalized and the old matrix. For matrix normalization, look here.
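If the scaling really were a plain multiplicative factor, no optimizer would be needed at all. A minimal sketch of that direct-normalization idea, using a made-up `g_old` array (note the question states its g depends on alpha nonlinearly, so this only covers the purely multiplicative case):

```python
import numpy as np

# Made-up unnormalized data; in the question g depends on alpha through
# the formula itself, so this only illustrates the purely linear case.
N = 100
g_old = np.linspace(0.5, 1.5, N)

scale = 1.0 / (np.sum(g_old) / N)  # factor that makes the mean equal 1
g_norm = g_old * scale

print(np.sum(g_norm) / N)  # mean is now 1 (up to floating-point error)
```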

In your code, the objective function contains a division by zero in (1 - g_at_one/alpha) at alpha = 0, so the function is not defined at 0, which is why I assume scipy is skipping it.

Edit: So I just reformulated your problem using a constraint and added some prints for better visualization. I hope this helps:

import numpy as np
from scipy.optimize import minimize

# just defining some parameters
N   = 100
g   = np.ones(N)
eta = np.array([i/100 for i in range(N)])
g_at_one = 0.01   

def f(alpha):
    g = alpha*(1 + (1 - g_at_one/alpha)
               * np.exp((eta - eta[N-1])/2)
               * (1/np.sqrt(3)*np.sin(np.sqrt(3)/2*(eta - eta[N-1]))
                  - np.cos(np.sqrt(3)/2*(eta - eta[N-1]))))
    to_be_minimized = np.sum(g)/N
    print("+ For alpha: %7s => f(alpha): %7s" % (round(alpha[0], 3),
                                                 round(to_be_minimized, 3)))
    return to_be_minimized

cons = {'type': 'ineq', 'fun': lambda alpha:  f(alpha) - 1}
result_of_minimization = minimize(f, 
                                  x0 = 0.1,
                                  constraints = cons,
                                  tol = 1e-8,
                                  options = {'disp': True})

alpha_at_min = result_of_minimization.x

# verify 
print("\nAlpha at min: ", alpha_at_min[0])
alpha = alpha_at_min
g = alpha*(1 + (1 - g_at_one/alpha)
           * np.exp((eta - eta[N-1])/2)
           * (1/np.sqrt(3)*np.sin(np.sqrt(3)/2*(eta - eta[N-1]))
              - np.cos(np.sqrt(3)/2*(eta - eta[N-1]))))
print("Verification: ", round(np.sum(g)/N - 1) == 0)

Output:

+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 1.0000000000000004
            Iterations: 3
            Function evaluations: 9
            Gradient evaluations: 3

Alpha at min:  7.9620687892224264
Verification:  True