Question
I have this code:
void HessianDetector::detectOctaveKeypoints(const Mat &firstLevel, ...)
{
    vector<Mat> blurs(par.numberOfScales + 3, Mat());
    blurs[1] = firstLevel;
    for (int i = 1; i < par.numberOfScales + 2; i++) {
        float sigma = par.sigmas[i] * sqrt(sigmaStep * sigmaStep - 1.0f);
        blurs[i + 1] = gaussianBlur(blurs[i], sigma);
    }
...
where:
Mat gaussianBlur(const Mat input, const float sigma)
{
    Mat ret(input.rows, input.cols, input.type());
    int size = (int)(2.0 * 3.0 * sigma + 1.0);
    if (size % 2 == 0) size++;
    GaussianBlur(input, ret, Size(size, size), sigma, sigma, BORDER_REPLICATE);
    return ret;
}
So, as you can see, each blurs[i+1] depends on blurs[i], so the loop cannot be parallelized. My question is: is there an equivalent way to obtain the same result, but using firstLevel instead of blurs[i]? It should look something like:
for (int i = 1; i < par.numberOfScales + 2; i++) {
    float sigma = // something;
    blurs[i + 1] = gaussianBlur(firstLevel, sigma);
}
Is it possible?
This answer makes me think it's possible, but I can't work out how to implement it.
Answer
This is possible (you can parallelize). I had exactly the same issue and solved it this way (see my answer to that question, which includes Python code):
https://dsp.stackexchange.com/questions/667/image-pyramid-without-decimation/55654#55654