I'm having some trouble getting a UIImage from a CVPixelBuffer. This is what I am trying:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);

CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];

if (attachments)
    CFRelease(attachments);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];
height and width are both correctly set to the resolution of the camera. image gets created, but it seems to be black (or maybe transparent?). I can't quite understand where the problem is. Any ideas would be appreciated.
First, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
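For reference, a minimal sketch of that approach in Swift (assuming you already have a configured AVCaptureSession named session; the view and variable names are illustrative):

import AVFoundation
import UIKit

// Attach a preview layer directly to the capture session;
// it renders and updates itself, no per-frame work needed.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.frame = view.bounds
view.layer.addSublayer(previewLayer)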
I have to admit to a lack of confidence about the central issue. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.

UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
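To illustrate the "recipe" point, a small Swift sketch (pixelBuffer stands for your CVPixelBuffer; the filter choice is arbitrary):

import CoreImage

// Building the recipe renders nothing...
let recipe = CIImage(cvPixelBuffer: pixelBuffer)
    .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])

// ...pixels only come into existence once a CIContext is asked
// to render the recipe over explicit bounds.
let rendered = CIContext().createCGImage(recipe, from: recipe.extent)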
I've had success just dodging around the issue with:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
// Render the CIImage into real pixels over an explicit output rect.
CGImageRef videoImage = [temporaryContext
    createCGImage:ciImage
    fromRect:CGRectMake(0, 0,
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
That gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
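One cost note: contextWithOptions: builds a new CIContext on every call, which is expensive, so if you convert every frame it's worth creating the context once and reusing it. A Swift sketch of the same workaround with a cached context (the function and variable names are illustrative):

import CoreImage
import UIKit

// Create the CIContext once and reuse it across frames.
let sharedCIContext = CIContext(options: nil)

func makeUIImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let rect = CGRect(x: 0, y: 0,
                      width: CVPixelBufferGetWidth(pixelBuffer),
                      height: CVPixelBufferGetHeight(pixelBuffer))
    // Rasterise the CIImage recipe over an explicit rect.
    guard let cgImage = sharedCIContext.createCGImage(ciImage, from: rect) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}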
Try this in Swift.

Swift 4.2:
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Swift 5:
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Note: this only works for RGB pixel buffers, not for grayscale.
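Usage is then a one-liner (a sketch; pixelBuffer and imageView stand for your own objects):

// Assuming pixelBuffer is the CVPixelBuffer you want to convert:
if let image = UIImage(pixelBuffer: pixelBuffer) {
    imageView.image = image
}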
Another way to get a UIImage. It works at least 10 times faster in my case:
int w = (int)CVPixelBufferGetWidth(pixelBuffer);
int h = (int)CVPixelBufferGetHeight(pixelBuffer);
int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r/w;

// Lock the base address before reading the pixel data.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for (int y = 0; y < maxY; y++) {
        for (int x = 0; x < w; x++) {
            int offset = bytesPerPixel * ((w * y) + x);
            data[offset] = buffer[offset];       // R
            data[offset+1] = buffer[offset+1];   // G
            data[offset+2] = buffer[offset+2];   // B
            data[offset+3] = buffer[offset+3];   // A
        }
    }
}

UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
Unless your image data is in some different format that needs swizzling or conversion, I'd recommend not incrementing anything... just smack the data into your context's memory area with memcpy, as follows. (Note that this assumes the context's row stride matches the pixel buffer's, i.e. bytes-per-row is exactly 4 * w with no padding; if CVPixelBufferGetBytesPerRow reports padding, copy row by row instead.)
//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();
void *ctxData = CGBitmapContextGetData(c);

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(ctxData, pxData, 4 * w * h);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

... and so on...
The previous methods led me to a CG raster data leak. This conversion method did not leak for me:
@autoreleasepool {
    CGImageRef cgImage = NULL;
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    if (res == noErr) {
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
        // ... use image here; it only lives within this scope ...
    }
    CGImageRelease(cgImage);
}
static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
    width = CVPixelBufferGetWidth( pixelBuffer );
    height = CVPixelBufferGetHeight( pixelBuffer );

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

    colorspace = CGColorSpaceCreateDeviceRGB();

    // The data provider keeps the pixel buffer retained (and locked) for the
    // lifetime of the CGImage; ReleaseCVPixelBuffer below undoes both.
    CVPixelBufferRetain( pixelBuffer );
    provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer );
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    if ( err && image ) {
        CGImageRelease( image );
        image = NULL;
    }
    if ( provider ) CGDataProviderRelease( provider );
    if ( colorspace ) CGColorSpaceRelease( colorspace );
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    // Balance the lock and retain taken in CreateCGImageFromCVPixelBuffer.
    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
    CVPixelBufferRelease( pixelBuffer );
}
A more modern solution is
let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))
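One caveat worth adding (my note, not part of the answer above): a UIImage created this way wraps the CIImage without rasterising it. It displays fine in a UIImageView, but its cgImage property is nil, so drawing it into a CGContext can come out empty. If you need a pixel-backed image, render through a CIContext first; a minimal sketch, assuming pixelBuffer is your CVPixelBuffer:

import CoreImage
import UIKit

let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
let context = CIContext(options: nil)
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    // This UIImage is backed by real pixels and safe to draw anywhere.
    let image = UIImage(cgImage: cgImage)
}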