How to force larger steps on scipy.optimize functions?

Problem Description

I have a function compare_images(k, a, b) that compares two 2D arrays a and b.

Inside the function, I apply a gaussian_filter with sigma=k to a. My idea is to estimate how much I must smooth image a in order for it to be similar to image b.
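
For concreteness, here is a minimal sketch of what such an objective might look like; the actual body of compare_images is not shown in the question, so the squared-difference metric below is an assumption:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compare_images(k, a, b):
        # Hypothetical objective: smooth `a` with sigma=k and return a
        # pixelwise dissimilarity to `b` (metric chosen for illustration).
        sigma = float(np.squeeze(k))  # optimizers may pass k as a 1-element array
        return np.sum((gaussian_filter(a, sigma=sigma) - b) ** 2)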

The problem is that my function compare_images only returns different values if k varies by more than 0.5, and if I do fmin(compare_images, init_guess, (a, b)) it usually gets stuck at the init_guess value.

I believe the problem is that fmin (and minimize) tends to start with very small steps, which in my case reproduce the exact same return value for compare_images, so the method thinks it has already found a minimum. It only tries a couple of times.

Is there a way to force fmin or any other minimization function from scipy to take larger steps? Or is there a method better suited to my needs?

EDIT: I found a temporary solution. First, as recommended, I used xtol=0.5 and higher as an argument to fmin. Even then, I still had some problems, and a few times fmin would return init_guess. I then created a simple loop so that if fmin == init_guess, I would generate another, random init_guess and try again, as sketched below.
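
A rough sketch of that retry loop, assuming compare_images and the images a and b are already defined (the random-guess range is an assumption):

    import numpy as np
    from scipy.optimize import fmin

    init_guess = 2.0
    k_opt = fmin(compare_images, init_guess, args=(a, b), xtol=0.5)
    # If fmin never moved off the starting point, redraw the guess and retry.
    while np.isclose(k_opt[0], init_guess):
        init_guess = np.random.uniform(0.5, 10.0)  # assumed sensible sigma range
        k_opt = fmin(compare_images, init_guess, args=(a, b), xtol=0.5)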

It's pretty slow, of course, but now I've got it running. It will take 20 hours or so to run for all my data, but I won't need to do it again.

Anyway, to better explain the problem for those still interested in finding a better solution:

  • I have 2 images, A and B, containing some scientific data.
  • A looks like a few dots with variable values (it's a matrix in which each valued point represents where an event occurred and its intensity)
  • B looks like a smoothed heatmap (it is the observed density of occurrences)
  • B looks just as if you had applied a gaussian filter to A, plus a bit of semi-random noise.
  • We are approximating B by applying a gaussian filter with constant sigma to A. This sigma was chosen visually, but only works for a certain class of images.
  • I'm trying to obtain an optimal sigma for each image, so that later I can look for relations between sigma and the class of event shown in each image.

Anyway, thanks for the help!

Solution

Quick check: you probably really meant fmin(compare_images, init_guess, (a,b))?

If gaussian_filter behaves as you say, your function is piecewise constant, meaning that optimizers relying on derivatives (i.e. most of them) are out. You can try a global optimizer like anneal, or brute-force search over a sensible range of k's.
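
The brute-force option is available directly as scipy.optimize.brute; the search range and grid density below are assumptions:

    from scipy import optimize

    # Evaluate compare_images on a grid of k values and keep the best point.
    # finish=None skips the default fmin polishing step, which would run into
    # the same flat-objective problem described in the question.
    k_best = optimize.brute(compare_images, ranges=((0.5, 10.0),),
                            args=(a, b), Ns=20, finish=None)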

However, as you described the problem, in general there will only be a clear, global minimum of compare_images if b is a smoothed version of a. Your approach makes sense if you want to determine the amount of smoothing of a that makes both images most similar.

If the question is "how similar are the images", then I think pixelwise comparison (maybe with a bit of smoothing) is the way to go. Depending on what images we are talking about, it might be necessary to align the images first (e.g. for comparing photographs). Please clarify :-)

edit: Another idea that might help: rewrite compare_images so that it calculates two versions of smoothed a -- one with sigma=floor(k) and one with sigma=ceil(k) (i.e. k rounded to the next-lower/next-higher int). Then calculate a_smooth = a_floor*(1-kfrac) + a_ceil*kfrac, with kfrac being the fractional part of k. This way the comparison function becomes continuous w.r.t. k.
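
A sketch of that interpolation trick, again using an assumed squared-difference metric:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compare_images(k, a, b):
        k = float(np.squeeze(k))           # fmin passes k as a 1-element array
        k_floor, k_ceil = np.floor(k), np.ceil(k)
        kfrac = k - k_floor                # fractional part of k
        # Blend the two integer-sigma smoothings so that the objective
        # varies continuously with k (assumes k >= 1 so floor(k) >= 1).
        a_floor = gaussian_filter(a, sigma=k_floor)
        a_ceil = gaussian_filter(a, sigma=k_ceil)
        a_smooth = a_floor * (1 - kfrac) + a_ceil * kfrac
        return np.sum((a_smooth - b) ** 2)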

Good Luck!
