I'm using live camera output to update a CIImage on an MTKView. My main issue is that there is a large, negative performance difference where an older iPhone gets better CPU performance than a newer one, despite every setting I've come across being the same.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below is my captureOutput function with two debug bools that I can turn on and off at runtime. I use them to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    /*
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented-out code
     (without captureOutputMTLOptions) does not have
     a significant impact.
     */
    guard let texture: MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else {
        return
    }

    var cameraImage: CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity
    transform = transform.scaledBy(x: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
    cameraImage = cameraImage.transformed(by: transform)

    /*
     // old non-Metal way of getting the CIImage from the CVPixelBuffer
     guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
         return
     }
     var cameraImage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
     */

    var orientation = UIImage.Orientation.right
    if isFrontCamera {
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes: currentCameraRes!)
    }

    DispatchQueue.main.async {
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }
}
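To narrow down which stage costs the CPU time on each device, one option (not something from my project above, just a sketch) is to wrap each stage of captureOutput in os_signpost intervals (iOS 12+) and compare them with the os_signpost instrument in Instruments. The subsystem/category strings and the stage names below are placeholders:

import os.signpost

// Placeholder subsystem/category strings, not from the original project.
private let frameLog = OSLog(subsystem: "com.example.livefilter", category: "frame")

// Wrap one stage of the per-frame work in a signpost interval so Instruments
// can attribute CPU time per stage on each phone.
func measureStage<T>(_ name: StaticString, _ work: () -> T) -> T {
    os_signpost(.begin, log: frameLog, name: name)
    defer { os_signpost(.end, log: frameLog, name: name) }
    return work()
}

// Hypothetical usage inside captureOutput:
// let texture = measureStage("convertToMTLTexture") { convertToMTLTexture(sampleBuffer: sampleBuffer) }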
Below is a chart of the results on both phones when toggling the different combinations of bools discussed above:
Even without the Metal view's CIImage being updated and without any filter being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's. That isn't significant overhead by itself, but it makes me suspect that the camera capture itself differs between the devices.
Is there anything I need to set manually between these two phones' AVCaptureDevice settings, including the activeFormat properties, to make them the same across devices?
The settings I have right now are:
if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()
        captureDevice.isSubjectAreaChangeMonitoringEnabled = true
        captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice.unlockForConfiguration()
    } catch {
        // Handle errors here
        print("There was an error focusing the device's camera")
    }
}
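One way to take format differences out of the equation would be to pin both phones to the same activeFormat and frame duration explicitly, instead of relying on whatever default each device picks. This is only a sketch of the idea; the 1280x720 / 30 fps target is an arbitrary example, not a value from my project:

import AVFoundation

// Sketch: pin a device to a specific capture format and frame rate.
// Note: setting activeFormat overrides the session's sessionPreset.
func lockFormat(on device: AVCaptureDevice, width: Int32 = 1280, height: Int32 = 720, fps: Int32 = 30) {
    // Find a format matching the requested dimensions that supports the frame rate.
    let match = device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == width && dims.height == height &&
            format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= Double(fps) }
    }

    guard let format = match else { return }

    do {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: fps)
        device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: fps)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for format configuration: \(error)")
    }
}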
My MTKView is based on code written by Simon Gladman, with a few edits for performance and to scale the render before it is enlarged to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue = { [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext = { [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage? {
        didSet {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard var image = image,
              let targetTexture: MTLTexture = currentDrawable?.texture else {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let customDrawableSize: CGSize = drawableSize
        let bounds = CGRect(origin: .zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y

        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
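For reference, the "render smaller and let Core Animation enlarge it" part can be done by capping drawableSize relative to the view's bounds. This is just a sketch of that mechanism, not code from my project, and the 0.75 factor is an arbitrary example:

// Sketch: render the CIImage into a smaller drawable and let Core Animation
// stretch the layer up to full screen width. The 0.75 factor is an arbitrary example.
func setRenderScale(_ scale: CGFloat, on view: MTKView) {
    // autoResizeDrawable must be off, otherwise MTKView resets drawableSize
    // to match the layer size on every layout pass.
    view.autoResizeDrawable = false
    let pointSize = view.bounds.size
    let pixelScale = view.contentScaleFactor * scale
    view.drawableSize = CGSize(width: pointSize.width * pixelScale,
                               height: pointSize.height * pixelScale)
}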
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are set up as follows:
func setupCameraAndMic() {
    let backCamera = AVCaptureDevice.default(for: AVMediaType.video)

    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil &&
        captureSession!.canAddInput(videoInput) {

        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }

        captureSession!.addInput(videoInput)

        setDeviceFrameRateForCurrentFilter(device: backCamera)

        stillImageOutput = AVCapturePhotoOutput()

        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)

            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]

            videoOutput.alwaysDiscardsLateVideoFrames = true

            if captureSession!.canAddOutput(videoOutput) {
                captureSession!.addOutput(videoOutput)
            }

            captureSession!.startRunning()
        }
    }

    setDefaultFocusAndExposure()
}
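Since alwaysDiscardsLateVideoFrames is true, it might also be worth checking whether both phones are actually processing the same number of frames. The didDrop callback is a standard method of AVCaptureVideoDataOutputSampleBufferDelegate; the counter around it is just an illustration, and whether dropped frames explain the difference is only my assumption:

// Sketch: count frames discarded because the delegate queue was still busy.
// If one phone drops frames and the other doesn't, they aren't doing the same work per second.
var droppedFrameCount = 0

func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    droppedFrameCount += 1
    if droppedFrameCount % 30 == 0 {
        print("Dropped \(droppedFrameCount) frames so far")
    }
}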
Video and mic are recorded as two separate streams. Details of the microphone and of recording video have been left out, since my focus is the performance of the live camera output.
UPDATE - I have a simplified test project on GitHub that makes it easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head, you are not comparing apples to apples. Even leaving aside that you are running a 2.49 GHz A12 against a 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, the XS's camera has several features that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry about the lack of sources; I tried to find metrics for the CPU cost of those features, but sadly I couldn't find any. That kind of information isn't relevant for marketing purposes when they're selling it as the best camera ever put in a smartphone.
They sell it as the best processor as well; we don't know what would happen using the XS's camera with an A9 processor. It would probably crash, and we'll never know...
PS.... Are your metrics for the whole processor or for the core being used? For the whole processor, you also need to consider the other tasks the device could be executing; for a single core, it is 21% of 200% versus 39% of 600%.
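If you want to rule out some of those extra features on the XS, a few of them can be switched off explicitly. The properties below are standard AVFoundation APIs, but whether these features are actually what costs the CPU here is just my assumption:

// Sketch: turn off video HDR and stabilization so the newer camera does less extra work per frame.
func disableExtraCameraFeatures(device: AVCaptureDevice, videoOutput: AVCaptureVideoDataOutput) {
    do {
        try device.lockForConfiguration()
        if device.activeFormat.isVideoHDRSupported {
            device.automaticallyAdjustsVideoHDREnabled = false
            device.isVideoHDREnabled = false
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device: \(error)")
    }

    // Stabilization is configured per connection, not per device.
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
}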