This article explains how to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage on iOS. It should be a useful reference for anyone dealing with the same problem.

Problem description

I tried to answer this in the original thread, but SO would not let me. Hopefully someone with more authority can merge this into the original question.

OK, here is a more complete answer. First, set up the capture:

// Create capture session
self.captureSession = [[AVCaptureSession alloc] init];

[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

// Setup capture input
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
                                                                           error:nil];
[self.captureSession addInput:captureInput];

// Setup video processing (capture output)
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
// Don't add frames to the queue if frames are already processing
captureOutput.alwaysDiscardsLateVideoFrames = YES;

// Create a serial queue to handle processing of frames
_videoQueue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:_videoQueue];

// Set the video output to store frame in YUV
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
[self.captureSession addOutput:captureOutput];
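
One thing the snippet above does not show: the session is configured but never started. Presumably, once the output has been added, the session also gets started somewhere, for example:

// Assumption (not shown in the original snippet): start the capture session
// once inputs and outputs are wired up.
[self.captureSession startRunning];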

Now the implementation for the delegate/callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
   fromConnection:(AVCaptureConnection *)connection
{

// Create autorelease pool because we are not in the main_queue
@autoreleasepool {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    //Lock the imagebuffer
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    //    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

    // This just moved the pointer past the offset
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);


    // convert the image
    _prefImageView.image = [self makeUIImage:baseAddress bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow];

    // Update the display with the captured image for DEBUG purposes
    dispatch_async(dispatch_get_main_queue(), ^{
        [_myMainView.yUVImage setImage:_prefImageView.image];
    });        
    // Unlock the image buffer now that we are done reading from it
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
}

And finally, here is the method to convert from YUV to a UIImage:

- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow {

NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);

uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
uint8_t *yBuffer = (uint8_t *)inBaseAddress;
uint8_t val;
int bytesPerPixel = 4;

// for each byte in the input buffer, fill in the output buffer with four bytes
// the first byte is the Alpha channel, then the next three contain the same
// value of the input buffer
for(int y = 0; y < inHeight*inWidth; y++)
{
    val = yBuffer[y];
    // Alpha channel
    rgbBuffer[(y*bytesPerPixel)] = 0xff;

    // next three bytes same as input
    rgbBuffer[(y*bytesPerPixel)+1] = rgbBuffer[(y*bytesPerPixel)+2] =  rgbBuffer[y*bytesPerPixel+3] = val;
}

// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGContextRef context = CGBitmapContextCreate(rgbBuffer, yPitch, inHeight, 8,
                                             yPitch*bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

CGImageRef quartzImage = CGBitmapContextCreateImage(context);

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

UIImage *image = [UIImage imageWithCGImage:quartzImage];

CGImageRelease(quartzImage);
free(rgbBuffer);
return  image;
}

You will also need to #import "Endian.h" (the fields of CVPlanarPixelBufferInfo_YCbCrBiPlanar are stored big-endian, which is why EndianU32_BtoN is used above).

Note that the call to CGBitmapContextCreate is much trickier than I expected. I'm not very savvy on video processing at all, but this call stumped me for a while. Then, when it finally worked, it was like magic.

Recommended answer

Background info: @Michaelg's version only accesses the Y buffer, so you only get luminance and not color. It also has a buffer-overrun bug if the pitch in the buffers and the number of pixels don't match (padding bytes at the end of a line for whatever reason). The background on what is occurring here is that this is a planar image format, which allocates one byte per pixel for luminance and 2 bytes per 4 pixels for color information. Rather than being stored contiguously in memory, these are stored as "planes", where the Y or luminance plane has its own block of memory and the CbCr or color plane also has its own block of memory. The CbCr plane consists of 1/4 the number of samples (half the height and width) of the Y plane, and each pixel in the CbCr plane corresponds to a 2x2 block in the Y plane. Hopefully this background helps.
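
To make that layout concrete, here is a minimal sketch (not from the original answer) of how a pixel at (x, y) maps into the two planes of such a bi-planar 4:2:0 buffer; yPlane, cbCrPlane, yPitch, and cbCrPitch are placeholders for the plane base addresses and row strides:

// Plane 0: one luma byte per pixel.
uint8_t luma = yPlane[y * yPitch + x];

// Plane 1: interleaved Cb/Cr pairs, one pair shared by each 2x2 block of luma pixels.
uint8_t cb = cbCrPlane[(y / 2) * cbCrPitch + (x / 2) * 2];
uint8_t cr = cbCrPlane[(y / 2) * cbCrPitch + (x / 2) * 2 + 1];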

Edit: Both his version and my old version had the potential to overrun buffers and would not work if the rows in the image buffer have padding bytes at the end of each row. Furthermore, my CbCr plane buffer was not created with the correct offset. To do this correctly you should always use the Core Video functions such as CVPixelBufferGetWidthOfPlane and CVPixelBufferGetBaseAddressOfPlane. This ensures that you are interpreting the buffer correctly, and it will work regardless of whether the buffer has a header and whether you screw up the pointer math. You should also take the row sizes and the buffer base addresses from Apple's functions. These are documented at: https://developer.apple.com/library/prerelease/ios/documentation/QuartzCore/Reference/CVPixelBufferRef/index.html Note that while this version makes some use of Apple's functions and some use of the header, it is best to only use Apple's functions. I may update this in the future to not use the header at all.
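
As a rough sketch (not the code below, which still reads the header), pulling everything through the per-plane accessors would look something like this, with imageBuffer being the CVImageBufferRef from the capture delegate:

// Sketch only: query each plane directly instead of trusting header/pointer math.
CVPixelBufferLockBaseAddress(imageBuffer, 0);

// Luma (Y) plane
uint8_t *yPlane  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t   yWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t   yHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t   yPitch  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

// Interleaved chroma (CbCr) plane
uint8_t *cbCrPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
size_t   cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

// ... walk the planes using yPitch / cbCrPitch as the row strides ...

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);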

This will convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage which you can then use.

First, set up the capture:

// Create capture session
self.captureSession = [[AVCaptureSession alloc] init];

[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

// Setup capture input
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
                                                                           error:nil];
[self.captureSession addInput:captureInput];

// Setup video processing (capture output)
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
// Don't add frames to the queue if frames are already processing
captureOutput.alwaysDiscardsLateVideoFrames = YES;

// Create a serial queue to handle processing of frames
_videoQueue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:_videoQueue];

// Set the video output to store frame in YUV
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
[self.captureSession addOutput:captureOutput];

Now the implementation for the delegate/callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
   fromConnection:(AVCaptureConnection *)connection
{

// Create autorelease pool because we are not in the main_queue
@autoreleasepool {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    //Lock the imagebuffer
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    //    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
    //get the cbrbuffer base address
    uint8_t* cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    // This just moved the pointer past the offset
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);


    // convert the image
    _prefImageView.image = [self makeUIImage:baseAddress cBCrBuffer:cbrBuff bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow];

    // Update the display with the captured image for DEBUG purposes
    dispatch_async(dispatch_get_main_queue(), ^{
        [_myMainView.yUVImage setImage:_prefImageView.image];
    });        
    // Unlock the image buffer now that we are done reading from it
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
}

And finally, here is the method to convert from YUV to a UIImage:

- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress cBCrBuffer:(uint8_t*)cbCrBuffer bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow {

    NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
    NSUInteger cbCrOffset = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.offset);
    NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes);

    uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
    uint8_t *yBuffer = (uint8_t *)inBaseAddress;
    //uint8_t *cbCrBuffer = inBaseAddress + cbCrOffset;
    int bytesPerPixel = 4;

    for(int y = 0; y < inHeight; y++)
    {
        uint8_t *rgbBufferLine = &rgbBuffer[y * inWidth * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for(int x = 0; x < inWidth; x++)
        {
            int16_t y = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];

            int16_t r = (int16_t)roundf( y + cr *  1.4 );
            int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( y + cb *  1.765);

            //ABGR
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSLog(@"ypitch:%lu inHeight:%zu bytesPerPixel:%d",(unsigned long)yPitch,inHeight,bytesPerPixel);
    NSLog(@"cbcrPitch:%lu",(unsigned long)cbCrPitch);
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, inWidth, inHeight, 8,
                                                 inWidth*bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGImageRelease(quartzImage);
    free(rgbBuffer);
    return image;
}
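
For what it's worth, the constants in the r/g/b lines above appear to be a rounded version of the usual full-range (JPEG-style) BT.601 YCbCr-to-RGB conversion, applied after Cb and Cr have been centered by subtracting 128:

R = Y + 1.402 * Cr
G = Y - 0.344 * Cb - 0.714 * Cr
B = Y + 1.772 * Cb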

You will also need to #import "Endian.h" and add the define #define clamp(a) (a>255?255:(a<0?0:a));
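
For completeness, a sketch of how those two additions would sit at the top of the implementation file. The trailing semicolon in the define above is harmless here (every clamp(...) use is a standalone statement), but it is safer to leave it off, and to parenthesize the argument, so the macro can also be used inside larger expressions:

#import "Endian.h"

// Clamp an intermediate value into the 0...255 range of a uint8_t channel.
#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))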

Note that the call to CGBitmapContextCreate is much trickier than I expected. I'm not very savvy on video processing at all, but this call stumped me for a while. Then, when it finally worked, it was like magic.
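
If it helps anyone else, my reading of why the call is fiddly (a guess, not something stated in the original answer): bytesPerRow must match the stride rgbBuffer was actually allocated with, and the bitmap-info flags decide how the four bytes written per pixel are interpreted. Annotating the same call:

// width/height describe rgbBuffer; bitsPerComponent is 8 (one byte per channel);
// bytesPerRow must be the real stride of rgbBuffer (inWidth * 4 here).
// With the bytes written above in A,B,G,R order, kCGBitmapByteOrder32Little plus
// kCGImageAlphaPremultipliedLast means Quartz reads each pixel back as R,G,B,A.
CGContextRef context = CGBitmapContextCreate(rgbBuffer, inWidth, inHeight, 8,
                                             inWidth * bytesPerPixel, colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);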

That concludes this article on converting a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage on iOS. Hopefully the recommended answer above is helpful.
