I'm building an app that edits an image in the HSL color space, using opencv2 plus some conversion code found on the Internet.

I believe the original image's color space is RGB, so here is my plan:

  • Convert the UIImage to a cv::Mat.
  • Convert the color space from BGR to HLS.
  • Loop over all the pixels to get their HLS values.
  • Apply my custom algorithm.
  • Write the changed HLS values back into the cv::Mat.
  • Convert the cv::Mat back to a UIImage.

  • Here is my code:

    Conversion between UIImage and cv::Mat

    Reference: https://stackoverflow.com/a/10254561/1677041
    #import <UIKit/UIKit.h>
    #import <opencv2/core/core.hpp>
    
    UIImage *UIImageFromCVMat(cv::Mat cvMat)
    {
        NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    
        CGColorSpaceRef colorSpace;
        CGBitmapInfo bitmapInfo;
    
        if (cvMat.elemSize() == 1) {
            colorSpace = CGColorSpaceCreateDeviceGray();
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
        } else {
            colorSpace = CGColorSpaceCreateDeviceRGB();
    #if 0
            // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
            // this means using the "32Little" byte order, and potentially
            // skipping the first pixel. These may need to be adjusted if the
            // input matrix uses a different pixel format.
            bitmapInfo = kCGBitmapByteOrder32Little | (
                cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
            );
    #else
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    #endif
        }
    
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    
        // Creating CGImage from cv::Mat
        CGImageRef imageRef = CGImageCreate(
            cvMat.cols,                 // width
            cvMat.rows,                 // height
            8,                          // bits per component
            8 * cvMat.elemSize(),       // bits per pixel
            cvMat.step[0],              // bytesPerRow
            colorSpace,                 // colorspace
            bitmapInfo,                 // bitmap info
            provider,                   // CGDataProviderRef
            NULL,                       // decode
            false,                      // should interpolate
            kCGRenderingIntentDefault   // intent
        );
    
        // Getting UIImage from CGImage
        UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
    
        return finalImage;
    }
    
    cv::Mat cvMatWithImage(UIImage *image)
    {
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
        size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
        CGFloat cols = image.size.width;
        CGFloat rows = image.size.height;
    
        cv::Mat cvMat(rows, cols, CV_8UC4);  // 8 bits per component, 4 channels
        CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    
        // check whether the UIImage is greyscale already
        if (numberOfComponents == 1) {
            cvMat = cv::Mat(rows, cols, CV_8UC1);  // 8 bits per component, 1 channels
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
        }
    
        CGContextRef contextRef = CGBitmapContextCreate(
            cvMat.data,         // Pointer to backing data
            cols,               // Width of bitmap
            rows,               // Height of bitmap
            8,                  // Bits per component
            cvMat.step[0],      // Bytes per row
            colorSpace,         // Colorspace
            bitmapInfo          // Bitmap info flags
        );
    
        CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
        CGContextRelease(contextRef);
    
        return cvMat;
    }
    

    I tested both functions separately and confirmed that they work correctly.

    The core operation of the conversion:
    /// Generate a new image based on specified HSL value changes.
    /// @param h_delta h value in [-360, 360]
    /// @param s_delta s value in [-100, 100]
    /// @param l_delta l value in [-100, 100]
    - (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
    {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            Mat original = cvMatWithImage(self.originalImage);
            Mat image;
    
            cvtColor(original, image, COLOR_BGR2HLS);
            // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
    
            // accept only char type matrices
            CV_Assert(image.depth() == CV_8U);
    
            int channels = image.channels();
    
            int nRows = image.rows;
            int nCols = image.cols * channels;
    
            int y, x;
    
            for (y = 0; y < nRows; ++y) {
                for (x = 0; x < nCols; ++x) {
                    // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                    // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                    Vec3b hls = original.at<Vec3b>(y, x);
                    uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
    
    //              h = MAX(0, MIN(360, h + h_delta));
    //              s = MAX(0, MIN(100, s + s_delta));
    //              l = MAX(0, MIN(100, l + l_delta));
    
                     printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1
    
                     original.at<Vec3b>(y, x)[0] = h;
                     original.at<Vec3b>(y, x)[1] = l;
                     original.at<Vec3b>(y, x)[2] = s;
                }
            }
    
            cvtColor(image, image, COLOR_HLS2BGR);
            UIImage *resultImage = UIImageFromCVMat(image);
    
            dispatch_async(dispatch_get_main_queue(), ^ {
                if (completion) {
                    completion(resultImage);
                }
            });
        });
    }
    

    My questions:
  • Why are the HLS values outside the range I expected? They show up as [0, 255], just like an RGB range. Is my cvtColor usage wrong?
  • Should I use Vec3b in the two for loops, or Vec3i instead?
  • Is there anything wrong with my plan above?

  • Update:
    Vec3b hls = original.at<Vec3b>(y, x);
    uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
    
    // Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
    // https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
    float fh, fl, fs;
    fh = h * 2.0;
    fl = l / 255.0;
    fs = s / 255.0;
    
    fh = MAX(0, MIN(360, fh + h_delta));
    fl = MAX(0, MIN(1, fl + l_delta / 100));
    fs = MAX(0, MIN(1, fs + s_delta / 100));
    
    // Convert them back
    fh /= 2.0;
    fl *= 255.0;
    fs *= 255.0;
    
    printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
    
    original.at<Vec3b>(y, x)[0] = short(fh);
    original.at<Vec3b>(y, x)[1] = short(fl);
    original.at<Vec3b>(y, x)[2] = short(fs);
    

    Best Answer

    1) Take a look at this, especially the RGB -> HLS part. When the source image is 8-bit, the values run from 0-255, but if you use a float image, they may have different ranges.



    (Referring to the RGB -> HLS formula in the docs:) V should be L; there is a typo in the documentation.

    您可以将RGB / BGR图像转换为浮点图像,然后将获得完整值。即S和L为0到1,H为0-360。

    But you have to be careful when converting it back.

    2) Vec3b is for unsigned 8-bit images (CV_8U), while Vec3i is for 32-bit integer ones (CV_32S). Which one to use depends on the image type. Since, as you said, the values run 0-255, the image is unsigned 8-bit, so you should use Vec3b. If you use the other one, 32 bits will be read per channel, and that element size is used to compute positions in the pixel array... so you may read out of range, hit a segmentation fault, or see random-looking problems.

    If in doubt, feel free to comment.

    Regarding "ios - Editing an RGB color-space image with an HSL conversion fails", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/58318464/
