Telling scipy.optimize.minimize to fail

Problem description


I'm using scipy.optimize.minimize for unconstrained optimization of an objective function that receives a couple of parameters and runs a complex numerical simulation based on them. This simulation does not always converge, in which case I make the objective function return inf in some cases and NaN in others.


I thought this hack would prevent the minimization from converging anywhere near a set of parameters that makes the simulation diverge. Instead, I encountered a case where the simulation won't even converge for the starting set of parameters, yet instead of failing, the optimization terminates "successfully" after 0 iterations. It doesn't seem to care that the objective function returns inf.


Is there a way to tell scipy.optimize.minimize to fail, e.g. by raising some sort of exception? In this case it's obvious that the optimization didn't terminate successfully - there were 0 iterations and I know the optimal result - but at some point I want to run problems whose solution I don't know, so I need to rely on minimize to tell me when things have gone wrong. If returning lots of NaNs and infs doesn't "break" the algorithm, I guess I'll have to do it by brute force.


Here is an example of what this almost-optimization looks like. The function - a function of two variables - is called four times in total:
1) at the starting point -> simulation diverges, f(x) = inf
2) at a point 1e-5 to the right (gradient approximation) -> simulation diverges, f(x) = inf
3) at a point 1e-5 higher (gradient approximation) -> simulation converges, f(x) = some finite value
4) once more at the starting point -> simulation diverges, f(x) = inf
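The situation can be reproduced with a toy stand-in for the simulation (hypothetical - the divergence condition `x[0] < 0.5` is invented for illustration, not the asker's code). With the default gradient-based method (BFGS), an inf at the starting point poisons the finite-difference gradient, and depending on the SciPy version the run can end after 0 iterations:

```python
import numpy as np
from scipy.optimize import minimize

calls = []  # record every point the optimizer evaluates

def objective(x):
    calls.append(np.array(x))
    # Hypothetical stand-in for a diverging simulation:
    # pretend it fails to converge whenever x[0] < 0.5
    if x[0] < 0.5:
        return np.inf
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

result = minimize(objective, x0=[0.0, 0.0])  # default method: BFGS
print("success:", result.success)
print("iterations:", result.nit)
print("function evaluations:", len(calls))
```

Inspecting `result.success` and `result.nit` (fields of the returned `OptimizeResult`) is the programmatic way to detect such a degenerate run, rather than trusting the return value alone.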

Recommended answer


I can think of two options:

  • opt for constrained optimization
  • modify your objective function to return a penalty whenever your numerical simulation does not converge. Basically this means returning a large but finite value - large compared to a 'normal' value, which depends on your problem at hand. minimize will then try to optimize in another direction


I am, however, a bit surprised that minimize does not treat inf as a large value and try to look for a solution in another direction. Could it be that it returns with 0 iterations only when your objective function returns NaN? You could try debugging the issue by printing the value just before the return statement in your objective function.
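As for making minimize fail outright, as the question asks: minimize does not catch exceptions raised inside the objective function, so raising a custom exception on divergence aborts the run and propagates to the caller. A minimal sketch (the divergence condition is again a hypothetical placeholder):

```python
from scipy.optimize import minimize

class SimulationDiverged(Exception):
    """Raised when the underlying simulation fails to converge."""

def objective(x):
    # Hypothetical divergence condition standing in for the real simulation
    if x[0] < 0.5:
        raise SimulationDiverged(f"simulation diverged at x = {x}")
    return (x[0] - 1.0) ** 2

aborted = False
try:
    result = minimize(objective, x0=[0.0])
except SimulationDiverged as err:
    aborted = True
    print("optimization aborted:", err)
```

This trades a silent "success" for a loud failure, at the cost of losing any partial progress the optimizer had made.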

