I have the following problem, which I am trying to solve with linear programming.
Given a set L of n-dimensional points and a search space S, the problem is to find a point x in S that maximizes the minimal Manhattan distance to the points in L:

    max_{x in S} min_{sigma in L} sum_{i=1}^{n} |x_i - sigma_i|

To model this as a linear programming problem, I introduce a variable z that is a lower bound on every distance, maximize z, and require

    z <= sum_{i=1}^{n} |x_i - sigma_i|   for every sigma in L

However, when trying to express the sum of absolute values by following the explanation in "Sum of absolute values constraint in semi definite programming", I wrote the following (implemented in Python with the PuLP library), but did not manage to obtain the actual optimal solution:
import pulp as plp

# Define the dimensionality of the problem
n = len(model)  # dimension of the space
m = len(L)      # number of points in L

# Create the LP problem
prob = plp.LpProblem("Maximize_minimal_manhattan_distance", plp.LpMaximize)

# Define the decision variables
x = plp.LpVariable.dicts("x", range(n), cat='Continuous')

# Define the variable to maximize
z = plp.LpVariable("z", lowBound=0, cat='Continuous')
print(x, z)

# Define the constraints for x, i.e. the search space constraints
for i in range(n):
    if i == 0:
        prob += x[i] >= model[i][0]
        prob += x[i] <= model[i][1]
    else:
        prob += x[i] >= model[i][0] + x[i-1]
        prob += x[i] <= model[i][1] + x[i-1]

# Define the constraints for the absolute value of the difference
# between x and each point in L
for j in range(m):  # for every sigma in L
    # prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])
    # ^ this is how it could be written if abs() were acceptable
    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), cat='Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus_" + str(j), range(n), cat='Continuous')
    for i in range(n):
        prob += diff[i] == L[j][i] - x[i]
        # split into two constraints: PuLP silently drops the first half of a
        # chained inequality like -diff_plus[i] <= diff[i] <= diff_plus[i]
        prob += -diff_plus[i] <= diff[i]
        prob += diff[i] <= diff_plus[i]
    prob += z == plp.lpSum([diff_plus[i] for i in range(n)])

# Define the objective function
prob += z

# Solve the problem
prob.solve()
Can anyone help me understand where I went wrong in rewriting the constraints to express the sum of absolute values? I have tried various approaches! Thanks :)
To form the absolute values correctly under maximization, use the following:

    dplus[i] - dmin[i] = sigma[i] - x[i]
    dplus[i] <= M*b[i]
    dmin[i]  <= M*(1-b[i])

    max sum(dplus[i] + dmin[i])

    dplus, dmin : positive variables
    b : binary variable
    M = 999 (say)

The binary b[i] forces at most one of dplus[i] and dmin[i] to be nonzero, so dplus[i] + dmin[i] equals |sigma[i] - x[i]| even when the objective pushes these variables upward. The -diff_plus[i] <= diff[i] <= diff_plus[i] trick only yields diff_plus[i] >= |diff[i]|, which is tight when you *minimize* a sum of absolute values; under maximization the solver is free to inflate diff_plus[i], so the model no longer represents the distance. Note that the binaries turn the model into a mixed-integer program rather than a pure LP.
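Putting this big-M construction into PuLP, here is a minimal self-contained sketch. Since the question's `model` and `L` are not given, it uses a hypothetical instance: a 2-D box search space [0, 10]^2 and two points; for that instance the optimal minimal distance is 10 (reached e.g. at (5, 5)).

```python
import pulp as plp

# Assumed toy instance: 2-D box search space [0, 10]^2, two points in L.
n = 2
box = [(0, 10), (0, 10)]   # search space S (hypothetical stand-in for `model`)
L = [(0, 0), (10, 10)]     # points to stay far from
M = 100                    # big-M; must exceed any possible |sigma[i] - x[i]|

prob = plp.LpProblem("max_min_manhattan", plp.LpMaximize)

x = [plp.LpVariable(f"x_{i}", lowBound=box[i][0], upBound=box[i][1]) for i in range(n)]
z = plp.LpVariable("z", lowBound=0)

for j, sigma in enumerate(L):
    dplus = [plp.LpVariable(f"dplus_{j}_{i}", lowBound=0) for i in range(n)]
    dmin = [plp.LpVariable(f"dmin_{j}_{i}", lowBound=0) for i in range(n)]
    b = [plp.LpVariable(f"b_{j}_{i}", cat="Binary") for i in range(n)]
    for i in range(n):
        # dplus - dmin is the signed difference; the binary forces one of the
        # pair to 0, so dplus + dmin == |sigma[i] - x[i]| even under maximization
        prob += dplus[i] - dmin[i] == sigma[i] - x[i]
        prob += dplus[i] <= M * b[i]
        prob += dmin[i] <= M * (1 - b[i])
    # z is a lower bound on the distance to EVERY point (<=, not ==),
    # so maximizing z maximizes the minimal distance
    prob += z <= plp.lpSum(dplus[i] + dmin[i] for i in range(n))

prob += z  # objective
prob.solve(plp.PULP_CBC_CMD(msg=0))
print(plp.value(z))  # 10.0 for this toy instance
```

Note the distance constraint uses `z <=`, not `z ==` as in the question's code: with `==` the model would force all m distances to be equal, which can make the problem infeasible or cut off the optimum.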