I'm writing an OpenGL application for iOS and need to take an in-app screenshot of the rendered scene. Everything works fine when I don't use multisampling. However, when I turn multisampling on, glReadPixels does not return the correct data (the scene itself is drawn correctly - the image quality is much better with multisampling).

I've already checked a bunch of similar questions on SO and in a few other places, but none of them solves my problem, since I'm already doing things the suggested way:

  • I take the screenshot after resolving the buffers, but before presenting the renderbuffer.
  • glReadPixels does not return an error.
  • I even tried setting kEAGLDrawablePropertyRetainedBacking to YES and taking the screenshot after presenting the renderbuffer - that doesn't work either.
  • I'm targeting the OpenGL ES 1.x rendering API (context initialized with kEAGLRenderingAPIOpenGLES1).

  • Basically, I'm out of ideas about what could be wrong. Asking on SO is my last resort.

    Here is the relevant source code:

    Creating the framebuffer
    - (BOOL)createFramebuffer
    {
    
        glGenFramebuffersOES(1, &viewFramebuffer);
        glGenRenderbuffersOES(1, &viewRenderbuffer);
    
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
    
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    
        // Multisample support
    
        glGenFramebuffersOES(1, &sampleFramebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
    
        glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
    
        glGenRenderbuffersOES(1, &sampleDepthRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);
    
        // End of multisample support
    
        GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete framebuffer object %x", status);
            return NO;
        }
    
        return YES;
    }
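
    For context, with this setup each frame is presumably rendered into the multisampled framebuffer first (this part is not shown in the question; a minimal sketch, assuming the draw calls happen elsewhere):

        // Each frame: direct all rendering into the multisampled framebuffer.
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
        glViewport(0, 0, backingWidth, backingHeight);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... issue draw calls here ...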
    

    Resolving the buffers and taking the snapshot
        glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
        glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
        glResolveMultisampleFramebufferAPPLE();
        [self checkGlError];
    
        //glFinish();
    
        if (capture)
            captureImage = [self snapshot:self];
    
        const GLenum discards[]  = {GL_COLOR_ATTACHMENT0_OES,GL_DEPTH_ATTACHMENT_OES};
        glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE,2,discards);
    
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    

    Snapshot method (basically copied from Apple's documentation)
    - (UIImage*)snapshot:(UIView*)eaglview
    {
    
        // Bind the color renderbuffer used to render the OpenGL ES view
        // If your application only creates a single color renderbuffer which is already bound at this point,
        // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
        // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    
    
        NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
        NSInteger dataLength = width * height * 4;
        GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
    
        // Read pixel data from the framebuffer
        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        [self checkGlError];
        glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
        [self checkGlError];
    
        // Create a CGImage with the pixel data
        // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
        // otherwise, use kCGImageAlphaPremultipliedLast
        CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    
        // OpenGL ES measures data in PIXELS
        // Create a graphics context with the target size measured in POINTS
        NSInteger widthInPoints, heightInPoints;
        if (NULL != UIGraphicsBeginImageContextWithOptions) {
            // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
            // Set the scale parameter to your OpenGL ES view's contentScaleFactor
            // so that you get a high-resolution snapshot when its value is greater than 1.0
            CGFloat scale = eaglview.contentScaleFactor;
            widthInPoints = width / scale;
            heightInPoints = height / scale;
            UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
        }
        else {
            // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
            widthInPoints = width;
            heightInPoints = height;
            UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
        }
    
        CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    
        // UIKit coordinate system is upside down to GL/Quartz coordinate system
        // Flip the CGImage by rendering it to the flipped bitmap context
        // The size of the destination area is measured in POINTS
        CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
        CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    
        // Retrieve the UIImage from the current context
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    
        UIGraphicsEndImageContext();
    
        // Clean up
        free(data);
        CFRelease(ref);
        CFRelease(colorspace);
        CGImageRelease(iref);
    
        return image;
    }
    

    Best Answer

    You resolve the multisample buffer as usual by binding sampleFramebuffer as the read framebuffer and viewFramebuffer as the draw framebuffer, and then calling glResolveMultisampleFramebufferAPPLE. But did you also think of binding viewFramebuffer as the read framebuffer (glBindFramebuffer(GL_READ_FRAMEBUFFER, viewFramebuffer)) before calling glReadPixels? glReadPixels always reads from the currently bound read framebuffer, and if you don't change that binding after the multisample resolve, it is still the multisampled framebuffer, not the resolved one.
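    Concretely, in the resolve-and-present sequence from the question, the fix is a single extra bind before taking the snapshot (a sketch using the variable names from the question's code):

        glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
        glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
        glResolveMultisampleFramebufferAPPLE();

        // Point the read framebuffer at the resolved, single-sampled buffer
        // so that glReadPixels inside -snapshot: reads the resolved image.
        glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer);

        if (capture)
            captureImage = [self snapshot:self];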

    I also find your glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer) call a bit suspicious, since it doesn't really do anything meaningful here: the currently bound renderbuffer only matters for functions that operate on renderbuffers (which is practically just glRenderbufferStorage), though it may be that the ES implementation needs this binding for [context presentRenderbuffer:GL_RENDERBUFFER_OES] to work. Perhaps you assumed this binding also controls which buffer glReadPixels reads from, but it does not - glReadPixels always reads from the framebuffer currently bound to GL_READ_FRAMEBUFFER.
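    To make the distinction explicit (a hypothetical sketch, not code from the question):

        // Affects only renderbuffer-targeted calls such as glRenderbufferStorageOES
        // and [context presentRenderbuffer:]; it does NOT change what glReadPixels sees.
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

        // This is the binding glReadPixels actually uses:
        glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer);
        glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);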

    The original question, ios - glReadPixels returns zeros with multisampling, can be found on Stack Overflow: https://stackoverflow.com/questions/16942625/
