Problem Description
I am trying to minimize a function with scipy.optimize with three input variables, two of which are bounded and one has to be chosen from a set of values. To ensure that the third variable is chosen from a predefined set of values, I introduced the following constraint:
from scipy.optimize import rosen, shgo
import numpy as np

# Set of values the third variable to be optimized is allowed to take
Z = np.array([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1])

def Rosen_Test(x):  # arbitrary objective function
    print(x)
    return rosen(x)**2 - np.sin(x[0])

def Cond_1(x):
    if x[2] in Z:
        return 1
    else:
        return -1

bounds = [(-512, 512),]*3
conds = ({'type': 'ineq', 'fun': Cond_1})

result = shgo(Rosen_Test, bounds, constraints=conds)
print(result)
However, when looking at the print results from Rosen_Test, it is evident that the condition is not being enforced - perhaps the condition is not defined correctly?
I was wondering if anyone has any ideas to ensure that the third variable can be chosen from a set.
Note: The shgo method was chosen because constraints can be introduced and changed easily. Also, I am open to using other optimization packages if this condition can be met.
Thanks for your help!!
Recommended Answer
The inequality constraints do not work like that.
As described in the docs, they are defined as
g(x) <= 0
and you need to write g(x) to work like that. In your case that is not the case: you are only returning a single scalar for one dimension. You need to return a vector with three dimensions, of shape (3,).
In your case you could try to use equality constraints instead, as this allows a slightly better hack. But I am still not sure if it will work, as these optimizers don't work like that. The whole thing will probably leave the optimizer with a rather bumpy and discontinuous objective function. You can read up on Mixed-Integer Nonlinear Programming (MINLP), maybe start here.
There is one more reason why your approach won't work as expected: since optimizers work with floating point numbers, they will likely never hit a number from your array exactly when optimizing and guessing new solutions.
This illustrates the problem:
import numpy as np
Z = np.array([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1])
print(0.7999999 in Z) # False, this is what the optimizer will find
print(0.8 in Z) # True, this is what you want
Maybe you should try to define your problem in a way that allows you to use an inequality constraint on the whole range of Z.
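One way you could read that suggestion (just a sketch, and my own interpretation rather than anything from the original question): bound x[2] to the range spanned by Z with the normal box bounds, and snap it to the nearest allowed value inside the objective, so the optimizer only ever sees an ordinary bounded variable. The Rosen_Test_snapped name below is made up for illustration:

import numpy as np
from scipy.optimize import rosen, shgo

Z = np.array([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1])

def Rosen_Test_snapped(x):
    # replace x[2] with the closest member of Z before evaluating the objective
    x = np.array([x[0], x[1], Z[np.argmin(np.abs(Z - x[2]))]])
    return rosen(x)**2 - np.sin(x[0])

# the third bound now simply covers the whole range of Z
bounds = [(-512, 512), (-512, 512), (Z.min(), Z.max())]
result = shgo(Rosen_Test_snapped, bounds)

The reported result.x[2] would then still need one final snap to the nearest member of Z before you use it.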
But let's look at how the equality-constraint idea could work.
Equality constraints are defined as
h(x) == 0
so you could use:
def Cond_1(x):
    if x[2] in Z:
        return np.zeros_like(x)
    else:
        return np.ones_like(x) * 1.0  # maybe multiply with some scalar?
The idea is to return an array [0.0, 0.0, 0.0] that satisfies the equality constraint if the number is found. Else return [1.0, 1.0, 1.0] to show that it is not satisfied.
Notes:
1.) You might have to tune this to return an array like [0.0, 0.0, 1.0] to show the optimizer which dimension you are unhappy about, so that it can make better guesses by adjusting only a single dimension.
2.) You might have to return a larger value than 1.0 to state a non-satisfied equality constraint. This depends on the implementation. The optimizer could think that 1.0 is fine as it is close to 0.0, so maybe you have to try something like [0.0, 0.0, 999.0].
This solves the problem with the dimensions. But it still will not find any of the numbers, due to the floating point issue mentioned above.
But we can try to hack around this like so:
import numpy as np

Z = np.array([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1])

def Cond_1(x):
    # how close you want to get to the numbers in your array
    tolerance = 0.001
    delta = np.abs(x[2] - Z)
    print(delta)
    print(np.min(delta) < tolerance)
    if np.min(delta) < tolerance:
        return np.zeros_like(x)
    else:
        # maybe you have to multiply this with some scalar
        # I have no clue how it is implemented
        # we need a value stating to the optimizer "NOT THIS ONE!!!"
        return np.ones_like(x) * 1.0

sol = np.array([0.5123, 0.234, 0.2])
print(Cond_1(sol))  # within tolerance -> returns [0. 0. 0.]

sol = np.array([0.5123, 0.234, 0.202])
print(Cond_1(sol))  # not within tolerance -> returns [1. 1. 1.]
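Just to show where this would plug in, here is a sketch reusing Rosen_Test and bounds from your question; I can't promise the underlying local solver copes well with such a discontinuous constraint, which is exactly the bumpiness concern from above:

# note the constraint type is now 'eq' instead of 'ineq'
conds = ({'type': 'eq', 'fun': Cond_1},)
result = shgo(Rosen_Test, bounds, constraints=conds)
print(result.x, result.fun)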
Here are some recommendations regarding the optimization. To make sure it works in a reliable way, try to start the optimization at different initial values. Global optimization algorithms might not take initial values if used with boundaries; the optimizer somehow discretizes the space itself.
What you could do to check the reliability of your optimization and get better overall results:
- Optimize the whole region [-512, 512] (for all three dimensions)
- Try 1/2 of that: [-512, 0] and [0, 512] (8 sub-optimizations, 2 for each dimension)
- Try 1/3 of that: [-512, -171], [-171, 170], [170, 512] (27 sub-optimizations, 3 for each dimension)
Now compare the converged results to see if the complete global optimization found the same result
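A sketch of how such a sub-optimization sweep could look (this is just the 8-run half-split case from the list above; Rosen_Test is assumed from your question and the membership constraint is left out for brevity):

from itertools import product

# 2 halves per dimension -> 2**3 = 8 sub-optimizations
halves = [(-512, 0), (0, 512)]
sub_results = [shgo(Rosen_Test, list(sub_bounds)) for sub_bounds in product(halves, repeat=3)]

best = min(sub_results, key=lambda r: r.fun)
print(best.x, best.fun)  # compare this against the full-domain run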
If the global optimizer did not find the "real" minimum but one of the sub-optimizations did:
- your objective function is too hard over the whole domain
- try a different global optimizer
- tune the parameters (maybe the 999 for the equality constraint)
- I often use sub-optimizations as part of the normal process, not only for testing, especially for black-box problems
Also see these answers:
Scipy.optimize inequality constraint - Which side of the inequality is considered?