I created a disparity map with the SGBM algorithm and it gives me a nice-looking image. Here is my code:

import numpy as np
import cv2
#load unrectified images
unimgR =cv2.imread("R.jpg")
unimgL =cv2.imread("L.jpg")
#load calibration from calibration file
calibration = np.load(r"C:\Users\XXX\PycharmProjects\rectify\Test3_OpenCV_Rectified.npz", allow_pickle=False)  # load variables from calibration file
imageSize = tuple(calibration["imageSize"])
leftMatrix = calibration["leftMatrix"]
leftDist = calibration["leftDist"]
leftMapX = calibration["leftMapX"]
leftMapY = calibration["leftMapY"]
leftROI = tuple(calibration["leftROI"])
rightMatrix = calibration["rightMatrix"]
rightDist = calibration["rightDist"]
rightMapX = calibration["rightMapX"]
rightMapY = calibration["rightMapY"]
rightROI = tuple(calibration["rightROI"])
disparityToDepthMap = calibration["disparityToDepthMap"]
# Rectify images (including monocular undistortion)
imgL = cv2.remap(unimgL, leftMapX, leftMapY, cv2.INTER_LINEAR)
imgR = cv2.remap(unimgR, rightMapX, rightMapY, cv2.INTER_LINEAR)
# SGBM Parameters
window_size = 15  # wsize default 3; 5; 7 for SGBM reduced size image; 15 for SGBM full size image (1300px and above); 5 Works nicely
left_matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=160,  # max_disp has to be divisible by 16, e.g. 160, 192, 256
    blockSize=5,
    P1=8 * 3 * window_size ** 2,
    P2=32 * 3 * window_size ** 2,
    disp12MaxDiff=1,
    uniquenessRatio=15,
    speckleWindowSize=0,
    speckleRange=2,
    preFilterCap=63,
    mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
# FILTER Parameters
lmbda = 80000
sigma = 1.2
visual_multiplier = 1.0
# Weighted least squares filter to fill sparse (unpopulated) areas of the disparity map
# by aligning the image edges and propagating disparity values from high- to low-confidence regions
wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls_filter.setLambda(lmbda)
wls_filter.setSigmaColor(sigma)
# Get depth information/disparity map using SGBM
displ = left_matcher.compute(imgL, imgR)  # .astype(np.float32)/16
dispr = right_matcher.compute(imgR, imgL)  # .astype(np.float32)/16
displ = np.int16(displ)
dispr = np.int16(dispr)
filteredImg = wls_filter.filter(displ, imgL, None, dispr)  # important to put "imgL" here!!!
filteredImg = cv2.normalize(src=filteredImg, dst=filteredImg, beta=0, alpha=255, norm_type=cv2.NORM_MINMAX)
filteredImg = np.uint8(filteredImg)
print("Distance:", 0.12*0.006/displ[1000][500]) #depth= Baseline * focal-lens / disparity
cv2.imshow('Disparity Map', filteredImg)
cv2.waitKey()
cv2.destroyAllWindows()

I used the formula distance = baseline * focal length / disparity.
My baseline is 12 cm and my focal length is 6 mm.

X,Y = 1000,550 should be at a distance of about 10 m, but it gives me 1.5550755939524837e-06.
I don't understand why this happens. Here are the images.

Best answer

The disparity image looks correct. However, for the depth/distance computation you should not hard-code the baseline and focal length; rather, take them from the calibration matrices. The Q matrix contains the baseline, and the distance unit (cm/mm/m) that was used during calibration is stored in it as well.

So I suggest you take them from the Q matrix.
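
Here is a minimal sketch of what that could look like, appended to the end of the script above. It assumes that the disparityToDepthMap array loaded from the .npz file is the 4x4 Q matrix returned by cv2.stereoRectify (the name suggests it is, but check your calibration script), and that displ is still the raw int16 output of StereoSGBM.compute, which is 16 times the disparity in pixels:

# Assumption: disparityToDepthMap is the 4x4 Q matrix from cv2.stereoRectify
Q = disparityToDepthMap
# StereoSGBM.compute returns disparity * 16 as int16; convert to disparity in pixels
# (pixels where matching failed come out negative and should be masked out)
disparity = displ.astype(np.float32) / 16.0
# Option 1: let OpenCV reproject every pixel to 3D; the Z channel is the depth,
# expressed in the same unit as the baseline used during calibration
points3D = cv2.reprojectImageTo3D(disparity, Q)
print("Depth at x=1000, y=550:", points3D[550, 1000, 2])  # NumPy indexing is [row, col] = [y, x]
# Option 2: read focal length (in pixels) and baseline out of Q and apply the formula.
# For the Q matrix produced by stereoRectify, Q[2, 3] = f and Q[3, 2] = -1/Tx,
# where |Tx| is the baseline in calibration units.
focal_px = Q[2, 3]
baseline = abs(1.0 / Q[3, 2])
print("Depth at x=1000, y=550:", focal_px * baseline / disparity[550, 1000])

Either way, the depth comes out in whatever unit the baseline had when the cameras were calibrated, which is exactly why reading it from Q is safer than hard-coding 0.12 m and 6 mm.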

On the topic of "python - depth/distance from a disparity map with OpenCV SGBM is off", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/55363566/
