This article looks at the question "Can I pass the objective and derivative functions to scipy.optimize.minimize as a single function?" and works through a practical way to handle it.

Question

I'm trying to use scipy.optimize.minimize to minimize a complicated function. I noticed in hindsight that the minimize function takes the objective and derivative functions as separate arguments. Unfortunately, I've already defined a function that returns the objective function value and the first-derivative values together, because the two are computed simultaneously in a for loop. I don't think there is a good way to split my function into two without the program essentially running the same for loop twice.

Is there a way to pass this combined function to minimize?

(FYI, I'm writing an artificial neural network backpropagation algorithm, so the for loop is used to loop over the training data. The objective and the derivatives are accumulated concurrently.)

Solution

Something that might work: you can memoize the function, meaning that if it is called a second time with the same inputs, it simply returns the same outputs corresponding to those inputs without doing any actual work the second time. What happens behind the scenes is that the results get cached. In the context of a nonlinear program there could be thousands of calls, which implies a large cache. Memoizers usually let you specify a cache limit, and the cache population is then managed FIFO. In other words, you still benefit fully in your particular case, because the inputs are only the same when you need to return the function value and the derivative around the same point in time. So what I'm getting at is that a small cache should suffice.

You don't say whether you are using py2 or py3. In Py 3.2+ you can use functools.lru_cache as a decorator to provide this memoization. Then you can write your code like this:

import functools

@functools.lru_cache()   # the default maxsize (128) is already a small cache, plenty here
def original_fn(x):
   # ... the single for loop that computes both the value and the derivative ...
   return fnvalue, fnderiv

def new_fn_value(x):
   fnvalue, fnderiv = original_fn(x)
   return fnvalue

def new_fn_deriv(x):
   fnvalue, fnderiv = original_fn(x)
   return fnderiv

Then you pass each of the new functions to minimize. You still pay a penalty for the second call, but it will do no work if x is unchanged. You will need to look into what "unchanged" means in the context of floating-point numbers, particularly since the change in x will fall away as the minimization begins to converge.
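For reference, a minimal end-to-end sketch of how these wrappers might be wired into scipy.optimize.minimize is shown below. The quadratic objective is purely illustrative, and the tuple conversion is an extra detail not mentioned in the answer: functools.lru_cache needs hashable arguments, while minimize passes in a NumPy array, so the array is turned into a tuple before the cached call.

import functools
import numpy as np
from scipy.optimize import minimize

@functools.lru_cache()                   # small default cache is plenty
def original_fn(x_tuple):                # takes a tuple so that lru_cache can hash the input
   x = np.asarray(x_tuple)
   fnvalue = np.sum(x ** 2)              # illustrative objective: f(x) = sum(x_i^2)
   fnderiv = 2.0 * x                     # its gradient, computed in the same pass
   return fnvalue, fnderiv

def new_fn_value(x):
   fnvalue, _ = original_fn(tuple(x))    # repeated x -> cached result, no recomputation
   return fnvalue

def new_fn_deriv(x):
   _, fnderiv = original_fn(tuple(x))
   return fnderiv

result = minimize(new_fn_value, x0=np.array([3.0, -4.0]), jac=new_fn_deriv)
print(result.x)                          # converges to [0, 0] for this toy objective

Since minimize typically evaluates the objective and the gradient at the same point back to back, one of the two calls is served straight from the cache.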

There are lots of recipes for memoization in py2.x if you look around a bit.
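For completeness, one such hand-rolled recipe might look roughly like this; it is only a sketch (not taken from the original answer) and assumes the inputs are hashable:

def memoize(fn):
   cache = {}
   def wrapper(x):
      if x not in cache:
         cache[x] = fn(x)   # compute once; later calls with the same x reuse the stored result
      return cache[x]
   return wrapper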

Did I make any sense at all?


That wraps up the question of whether the objective and derivative functions can be passed to scipy.optimize.minimize as a single function; hopefully the answer above helps.
