I'm trying to generate a UIImage from a video frame captured with GPUImage. I've done a lot of AVFoundation video work, but I'm new to using GPUImage. I have subclassed GPUImageVideoCamera and added this method, but the UIImage is always nil. If anyone can tell me where I'm going so horribly wrong, I'd be very grateful!
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    [super processVideoSampleBuffer:sampleBuffer]; // let GPUImage do its processing first

    if (!self.thumbnailGenerated)
    {
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        NSLog(@"%f", (float)timestamp.value / timestamp.timescale);
        self.thumbnailGenerated = YES;

        dispatch_sync(dispatch_get_main_queue(), ^
        {
            // generate a preview frame from the last filter in the camera filter chain
            UIImage *thumbnailImage = [UIImage imageWithCGImage:[[self.targets lastObject] newCGImageFromCurrentlyProcessedOutput]];
            NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Thumbnail.png"];
            [UIImagePNGRepresentation(thumbnailImage) writeToFile:pathToMovie atomically:YES];
        });
    }
}
I've used this code to generate a CGImageRef of the first frame, for a thumbnail:

NSError *error = nil;
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
[imageGenerator setAppliesPreferredTrackTransform:YES];
NSData *videoData = [NSData dataWithContentsOfURL:asset.URL];
CGImageRef image = [imageGenerator copyCGImageAtTime:kCMTimeZero actualTime:NULL error:&error];
You can replace kCMTimeZero with an actual time value to get the frame you want. After that, you'll have to convert your CGImageRef into a UIImage.
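As a minimal sketch of those two steps, something like this should work; `videoURL` is a placeholder for your own asset URL, and 2.0 seconds is an arbitrary example time:

// Grab a frame at ~2 seconds instead of the first frame,
// then wrap the resulting CGImageRef in a UIImage.
NSError *error = nil;
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
imageGenerator.appliesPreferredTrackTransform = YES;

CMTime time = CMTimeMakeWithSeconds(2.0, 600); // 2 seconds at a 600 timescale
CGImageRef cgImage = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:&error];
if (cgImage)
{
    UIImage *thumbnail = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // copyCGImageAtTime: returns a +1 reference
    // ... use thumbnail
}

Note that copyCGImageAtTime: seeks to the nearest keyframe by default; you can tighten that with the generator's requestedTimeToleranceBefore/After properties if you need an exact frame.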
I'm not sure whether this is any help, but I do get a thumbnail while processing video. For that I'm using:
videoInput --> someMyOperations --> fileOutput
someMyOperations --> imageOutput // imageOutput is a PictureOutput()

videoInput.start() // this needs to be called!

imageOutput.saveNextFrameToUrl(coverUrl, format: .jpg) { file in
    // here goes the code for what to do with the thumbnail
    videoInput.cancel() // quite probably you want this here
}
That's a guess; I haven't seen your code, but this works for me.