Can't add movie recording to AVCamFilter on iPad


I'm working with Apple's AVCamFilter demo app. Unfortunately, it doesn't include movie recording, so I tried creating a movie output at the top of the class, directly below the photo output:

let movieOutput = AVCaptureMovieFileOutput()

Then I added the output in configureSession() (again, directly below where the photo output is added), like this:

if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
} else {
    print("Could not add movie output to the session")
    setupResult = .configurationFailed
    session.commitConfiguration()
    return
}

On my iPhone (13 Pro) this works and I can record video successfully. But on my iPad Air the camera preview just goes black. No error is thrown, and setupResult prints as "success", but there is no video. Interestingly, if I take a photo with the shutter button, it is captured successfully (with an image) to the photo library.

I haven't made any other modifications to the project, so you can try this yourself by downloading the demo app and adding these lines.

Update: I found some related posts about AVCaptureMovieFileOutput being incompatible with AVCaptureVideoDataOutput. But if that's true, why does it work on my iPhone?

swift avfoundation metal avcapturesession mtkview
1 Answer

I never figured out why it works on my iPhone, but as a workaround I implemented AVAssetWriter. This should let AVCamFilter record video (and audio!) on any device.

First, add these variables at the top of the class:

let audioSession = AVCaptureSession()
private let audioDataOutputQueue = DispatchQueue(label: "AudioDataQueue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
private let audioDataOutput = AVCaptureAudioDataOutput()

// Asset writer state for the recording in progress
private var _assetWriter: AVAssetWriter?
private var _assetWriterVideoInput: AVAssetWriterInput?
private var _assetWriterAudioInput: AVAssetWriterInput?
private var _adapter: AVAssetWriterInputPixelBufferAdaptor?
private var _filename = ""
private var _time: Double = 0
private var _captureState = _CaptureState.idle

private enum _CaptureState {
    case idle, start, capturing, end
}

I put the audio in a separate session because I wanted to keep haptics available (see the notes below), but you can put everything in the same session if you prefer.

In configureSession(), after the final session.commitConfiguration(), add:

audioSession.beginConfiguration()

// Add an audio input (this requires microphone permission)
if let audioDevice = AVCaptureDevice.default(for: .audio),
   let audioInput = try? AVCaptureDeviceInput(device: audioDevice),
   audioSession.canAddInput(audioInput) {
    audioSession.addInput(audioInput)
} else {
    print("Could not add audio input to the session")
}

// Add an audio data output
if audioSession.canAddOutput(audioDataOutput) {
    audioSession.addOutput(audioDataOutput)
    audioDataOutput.setSampleBufferDelegate(self, queue: audioDataOutputQueue)
} else {
    print("Could not add audio data output to the session")
}

audioSession.commitConfiguration()
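
One thing to check: setSampleBufferDelegate(self, ...) only compiles if the class also conforms to AVCaptureAudioDataOutputSampleBufferDelegate. The video and audio delegate protocols share the same captureOutput(_:didOutput:from:) callback, so no extra method is needed; a one-line sketch, assuming the demo's CameraViewController class:

// Hypothetical conformance; adjust the class name to your own view controller.
extension CameraViewController: AVCaptureAudioDataOutputSampleBufferDelegate {}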

Then replace the contents of captureOutput(_:didOutput:from:) with:

if connection.output == videoDataOutput {
    processVideo(sampleBuffer: sampleBuffer)
}

let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
switch _captureState {
case .start:
    // Set up the recorder
    _filename = UUID().uuidString
    let videoPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("\(_filename).mov")
    let writer = try! AVAssetWriter(outputURL: videoPath, fileType: .mov)
    let settings = videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings) // or e.g. [AVVideoCodecKey: AVVideoCodecType.h264, AVVideoWidthKey: 1920, AVVideoHeightKey: 1080]
    input.mediaTimeScale = CMTimeScale(bitPattern: 600)
    input.expectsMediaDataInRealTime = true
    input.transform = CGAffineTransform(rotationAngle: .pi / 2)
    let adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    if writer.canAdd(input) {
        writer.add(input)
    }
    let audioSettings = audioDataOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    audioInput.expectsMediaDataInRealTime = true
    if writer.canAdd(audioInput) {
        writer.add(audioInput)
    }
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    _assetWriter = writer
    _assetWriterVideoInput = input
    _assetWriterAudioInput = audioInput
    _adapter = adapter
    _captureState = .capturing
    _time = timestamp
case .capturing:
    if connection.output == videoDataOutput {
        // Write the filtered frames (rendered into the Metal preview) to the video input
        if _assetWriterVideoInput?.isReadyForMoreMediaData == true, let pixelBuffer = previewView.pixelBuffer {
            let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
            _adapter?.append(pixelBuffer, withPresentationTime: time)
        }
    } else if connection.output == audioDataOutput {
        // Retime the audio samples relative to the recording start and write them to the audio input
        if _assetWriterAudioInput?.isReadyForMoreMediaData == true {
            let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
            if let modifiedBuffer = setPresentationTimestamp(sampleBuffer: sampleBuffer, presentationTimestamp: time) {
                _assetWriterAudioInput?.append(modifiedBuffer)
            }
        }
    }
case .end:
    guard _assetWriterVideoInput?.isReadyForMoreMediaData == true, _assetWriter?.status != .failed else { break }
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("\(_filename).mov")
    _assetWriterVideoInput?.markAsFinished()
    _assetWriterAudioInput?.markAsFinished()
    _assetWriter?.finishWriting { [weak self] in
        self?._captureState = .idle
        self?._assetWriter = nil
        self?._assetWriterVideoInput = nil
        self?._assetWriterAudioInput = nil
        UISaveVideoAtPathToSavedPhotosAlbum(url.path, nil, nil, nil)
    }
default:
    break
}

(You can do whatever you like with the URL; saving to the photo library is just an example.)
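
For instance, here is a sketch of saving through the Photos framework instead of UISaveVideoAtPathToSavedPhotosAlbum; saveToPhotoLibrary is an illustrative name, and it assumes NSPhotoLibraryAddUsageDescription is set in Info.plist:

import Photos

// Hypothetical helper: hand the finished movie file to the photo library.
func saveToPhotoLibrary(_ url: URL) {
    PHPhotoLibrary.shared().performChanges({
        _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }) { success, error in
        if !success {
            print("Could not save video: \(error?.localizedDescription ?? "unknown error")")
        }
    }
}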

Finally, you'll need this function to keep the audio and video in sync:

func setPresentationTimestamp(sampleBuffer: CMSampleBuffer, presentationTimestamp: CMTime) -> CMSampleBuffer? {
    var sampleBufferCopy: CMSampleBuffer? = nil
    var timingInfoArray = [CMSampleTimingInfo(duration: CMTimeMake(value: 1, timescale: 30), presentationTimeStamp: presentationTimestamp, decodeTimeStamp: CMTime.invalid)]

    // Copy the sample buffer, replacing its timing info with the new presentation timestamp
    let status = CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleTimingEntryCount: 1, sampleTimingArray: &timingInfoArray, sampleBufferOut: &sampleBufferCopy)

    if status == noErr {
        return sampleBufferCopy
    } else {
        // Handle the error
        return nil
    }
}

To actually trigger video recording, set _captureState to .start, then set it to .end to finish.
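
For example, a minimal sketch of a record-button action; toggleRecording() is an illustrative name, not part of the demo:

private func toggleRecording() {
    switch _captureState {
    case .idle:
        _captureState = .start   // the next captureOutput callback sets up the writer
    case .capturing:
        _captureState = .end     // the next callback finalizes and saves the file
    default:
        break                    // ignore taps while starting up or finishing
    }
}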

Notes:

- You may see black frames at the very beginning and end of the video. I tried several ways to fix this and nothing worked, so I changed writer.startSession(atSourceTime: .zero) to writer.startSession(atSourceTime: CMTime(seconds: 0.25, preferredTimescale: CMTimeScale(600))), which simply trims the first 0.25 seconds. For the end, adding this before _assetWriterVideoInput?.markAsFinished() (inside the .end case, where timestamp and _time are in scope) seems to fix it:

let endTime = timestamp - _time
_assetWriter?.endSession(atSourceTime: CMTime(seconds: endTime, preferredTimescale: CMTimeScale(600)))

- Running the microphone disables haptics. That's why I use a separate audio session: I can start it running when video recording begins and stop it when recording ends. As long as you aren't recording audio, you should still be able to use haptics. Note that you will, of course, need microphone permission in order to record.
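
A sketch of that start/stop dance, paired with the state changes in toggleRecording() above. It assumes a background sessionQueue like the one in the demo, since startRunning() and stopRunning() block the calling thread:

// Hypothetical helper: run the mic only while recording so haptics work otherwise.
private func setAudioSessionRunning(_ running: Bool) {
    sessionQueue.async { [weak self] in
        guard let self = self else { return }
        if running, !self.audioSession.isRunning {
            self.audioSession.startRunning()
        } else if !running, self.audioSession.isRunning {
            self.audioSession.stopRunning()
        }
    }
}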

- Some of the AVAssetWriter code is from here: https://gist.github.com/yusuke024/b5cd3909d9d7f9e919291491f6b381f0

- I'm by no means an expert! Feel free to point out anything unnecessary or wrong here.
