Solving an optimization problem with an absolute value in the objective using GNU Octave's GLPK

Votes: 0 · Answers: 2

I want to minimize:

|0.2126 * R1 + 0.7152 * G1 + 0.0722 * B1 - 7(0.2126 * R2 + 0.7152 * G2 + 0.0722 * B2)- 0.36|

subject to:

0.2126 * R1 + 0.7152 * G1 + 0.0722 * B1 - (0.2126 * R2 + 0.7152 * G2 + 0.0722 * B2) >= 0

Here is my code so far:

ratio = 7;

objective_coefficients  = [ 0.2126,  0.7152,  0.0722, ratio * -0.2126, ratio * -0.7152, ratio * -0.0722, (1 - ratio) * 0.05]';
constraint_coefficients = [ 0.2126,  0.7152,  0.0722,         -0.2126,         -0.7152,         -0.0722];
constraint_right_hand_side_values = [0]';
variable_lower_bounds = [0, 0, 0, 0, 0, 0, 1]';
variable_upper_bounds = [1, 1, 1, 1, 1, 1, 1];
constriant_types = "L";
variable_types = "CCCCCCC";
sense = 1;

parameters.messsage_level = 1;
parameters.iterations_limit = 100;

[xmin, fmin, status, extra] = glpk(objective_coefficients,
                                   constraint_coefficients,
                                   constraint_right_hand_side_values,
                                   variable_lower_bounds,
                                   variable_upper_bounds,
                                   constriant_types,
                                   variable_types,
                                   sense,
                                   parameters);

The result xmin is all ones.
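That all-ones outcome is reproducible: with the absolute value dropped, the objective is a plain linear function of the variables, and minimizing it simply pushes the solution to a vertex of the box. A minimal sketch with scipy.optimize.linprog (used here instead of GLPK purely for illustration, with the same coefficients, bounds, and constraint as the Octave code above):

```python
import numpy as np
from scipy.optimize import linprog

ratio = 7
lum = np.array([0.2126, 0.7152, 0.0722])  # luminance weights for R, G, B

# The objective taken literally as a linear function, i.e. with the
# absolute value dropped -- this is what the Octave code above solves.
c = np.concatenate([lum, -ratio * lum, [(1 - ratio) * 0.05]])

# luminance1 - luminance2 >= 0, rewritten as a "<= 0" row for linprog.
A_ub = np.concatenate([-lum, lum, [0]]).reshape(1, -1)
b_ub = [0]

# First six variables in [0, 1]; the seventh pinned to 1, as in the question.
bounds = [(0, 1)] * 6 + [(1, 1)]

result = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x)  # all ones
```

Minimizing luminance1 - ratio*luminance2 subject to luminance1 >= luminance2 is best served by making both luminances as large as possible, so every variable lands on its upper bound of 1; no absolute value is being minimized at all.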

optimization octave linear-programming
2 Answers

1 vote
min abs(x)

can be written as:

min y
    -y ≤ x ≤ y   (in practice: split into two ≤ constraints)
    y ≥ 0

or, using variable splitting:

 min  xplus + xmin
      xplus - xmin = x
      xplus, xmin ≥ 0

See any textbook on linear programming for details.
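Both reformulations can be sketched with scipy.optimize.linprog on a toy instance, min |x - 3| with 0 ≤ x ≤ 1 (an illustrative problem, not the asker's; the optimum is x = 1 with value 2):

```python
from scipy.optimize import linprog

# Formulation 1: min y  with  -y <= x - 3 <= y,  y >= 0.
# Variables [x, y]; the two-sided bound becomes two "<=" rows.
res1 = linprog(
    c=[0, 1],
    A_ub=[[1, -1],    #  (x - 3) <= y   ->   x - y <=  3
          [-1, -1]],  # -(x - 3) <= y   ->  -x - y <= -3
    b_ub=[3, -3],
    bounds=[(0, 1), (0, None)],
)

# Formulation 2: min xplus + xmin  with  xplus - xmin = x - 3,
# xplus, xmin >= 0.  Variables [x, xplus, xmin].
res2 = linprog(
    c=[0, 1, 1],
    A_eq=[[1, -1, 1]],  # x - xplus + xmin = 3
    b_eq=[3],
    bounds=[(0, 1), (0, None), (0, None)],
)

print(res1.fun, res2.fun)  # both equal 2
```

At the optimum of either formulation the auxiliary variables take the value |x - 3| exactly, because any slack would increase the objective.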


0 votes

For an actual demonstration with the similarly flavoured scipy.optimize.linprog: you need an auxiliary objective variable that is minimized with coefficient 1, with all other objective coefficients set to 0. Then add constraint rows so that the auxiliary variable is greater than both the expression inside the absolute value and its negation; that leaves you with

import numpy as np
from scipy.optimize import linprog

ratio = 7

objective_coefficients = np.array([0, 0, 0, 0, 0, 0, 1])
constraint_coefficients = np.array([
    [-0.2126, -0.7152, -0.0722,        0.2126,         0.7152,        0.0722,  0],  # -ax <= 0
    [ 0.2126,  0.7152,  0.0722, -ratio*0.2126, -ratio *0.7152, -ratio*0.0722, -1],  #  bx + c <= obj :  bx - obj <= -c
    [-0.2126, -0.7152, -0.0722,  ratio*0.2126,  ratio *0.7152,  ratio*0.0722, -1],  # -bx - c <= obj : -bx - obj <=  c
])
constraint_right_hand_side_values = np.array([
    0,
     ratio*0.05 + 0.36,
    -ratio*0.05 - 0.36,
])
variable_lower_bounds = [0, 0, 0, 0, 0, 0,      0]
variable_upper_bounds = [1, 1, 1, 1, 1, 1, np.inf]

result = linprog(
    c=objective_coefficients,
    bounds=np.array((variable_lower_bounds, variable_upper_bounds)).T,
    A_ub=constraint_coefficients,
    b_ub=constraint_right_hand_side_values,
)

R1, G1, B1, R2, G2, B2, obj = result.x
print(result)
print()

print('Confirmation:')
print(0.2126*R1 + 0.7152*G1 + 0.0722*B1 - (0.2126*R2 + 0.7152*G2 + 0.0722*B2), '>= 0')
obj_lhs = result.x[:-1] @ [
    0.2126,  0.7152,  0.0722, ratio*-0.2126, ratio*-0.7152, ratio*-0.0722
] - (ratio*0.05 + 0.36)
print(f'|{obj_lhs}| <= {obj}')
print()
        message: Optimization terminated successfully. (HiGHS Status 7: Optimal)
        success: True
         status: 0
            fun: 0.0
              x: [ 0.000e+00  8.918e-01  1.000e+00  0.000e+00  0.000e+00
                   0.000e+00  0.000e+00]
            nit: 1
          lower:  residual: [ 0.000e+00  8.918e-01  1.000e+00  0.000e+00
                              0.000e+00  0.000e+00  0.000e+00]
                 marginals: [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00
                              0.000e+00  0.000e+00  1.000e+00]
          upper:  residual: [ 1.000e+00  1.082e-01  0.000e+00  1.000e+00
                              1.000e+00  1.000e+00        inf]
                 marginals: [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00
                              0.000e+00  0.000e+00  0.000e+00]
          eqlin:  residual: []
                 marginals: []
        ineqlin:  residual: [ 7.100e-01  0.000e+00  0.000e+00]
                 marginals: [-0.000e+00 -0.000e+00 -0.000e+00]
 mip_node_count: 0
 mip_dual_bound: 0.0
        mip_gap: 0.0

Confirmation:
0.71 >= 0
|0.0| <= 0.0

In other words, your objective can most likely be made "perfect" (equal to 0), which means you can simplify the problem to

import numpy as np
from scipy.optimize import linprog

ratio = 7

objective_coefficients = np.array([0, 0, 0, 0, 0, 0])
ub_coefficients = np.array([
    [-0.2126, -0.7152, -0.0722,        0.2126,         0.7152,        0.0722],  # -ax <= 0
])
eq_coefficients = np.array([
    [ 0.2126,  0.7152,  0.0722, -ratio*0.2126, -ratio *0.7152, -ratio*0.0722],  #  bx + c == 0 :  bx == -c
])
ub_rhs = np.array([0])
eq_rhs = np.array([ratio*0.05 + 0.36])
variable_lower_bounds = [0, 0, 0, 0, 0, 0]
variable_upper_bounds = [1, 1, 1, 1, 1, 1]

result = linprog(
    c=objective_coefficients,
    bounds=np.array((variable_lower_bounds, variable_upper_bounds)).T,
    A_ub=ub_coefficients, b_ub=ub_rhs,
    A_eq=eq_coefficients, b_eq=eq_rhs,
)

R1, G1, B1, R2, G2, B2 = result.x
print(result)
print()

print('Confirmation:')
print(0.2126*R1 + 0.7152*G1 + 0.0722*B1 - (0.2126*R2 + 0.7152*G2 + 0.0722*B2), '>= 0')
obj_lhs = result.x @ [
    0.2126,  0.7152,  0.0722, ratio*-0.2126, ratio*-0.7152, ratio*-0.0722,
] - (ratio*0.05 + 0.36)
print(f'|{obj_lhs}| == 0')
print()
        message: Optimization terminated successfully. (HiGHS Status 7: Optimal)
        success: True
         status: 0
            fun: 0.0
              x: [ 0.000e+00  8.918e-01  1.000e+00  0.000e+00  0.000e+00
                   0.000e+00]
            nit: 0
          lower:  residual: [ 0.000e+00  8.918e-01  1.000e+00  0.000e+00
                              0.000e+00  0.000e+00]
                 marginals: [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00
                              0.000e+00  0.000e+00]
          upper:  residual: [ 1.000e+00  1.082e-01  0.000e+00  1.000e+00
                              1.000e+00  1.000e+00]
                 marginals: [ 0.000e+00  0.000e+00  0.000e+00  0.000e+00
                              0.000e+00  0.000e+00]
          eqlin:  residual: [ 0.000e+00]
                 marginals: [-0.000e+00]
        ineqlin:  residual: [ 7.100e-01]
                 marginals: [-0.000e+00]
 mip_node_count: 0
 mip_dual_bound: 0.0
        mip_gap: 0.0

Confirmation:
0.71 >= 0
|0.0| == 0