Overlay static text via AVMutableVideoComposition

Problem description (votes: 0, answers: 1)

In Swift on iOS, I have an array of AVURLAsset. I pass it through a function that stitches/merges the video assets into one final video. For each video, my goal is to overlay text centered in the frame.

When I play the output video, the video assets are merged correctly, but I cannot understand why there is no text overlay. I have tried following existing answers, to no avail. Any guidance would be appreciated.

```swift
func merge(videos: [AVURLAsset], completion: @escaping (_ url: URL, _ asset: AVAssetExportSession) -> ()) {
    let videoComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    var count = 0
    var maxVideoSize = CGSize.zero // For determining the maximum video size

    guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    guard let audioCompositionTrack = videoComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }

    let mainComposition = AVMutableVideoComposition()
    var parentLayers = [CALayer]() // To hold all individual parent layers

    for video in videos {
        if let videoTrack = video.tracks(withMediaType: .video)[safe: 0] {
            videoCompositionTrack.preferredTransform = videoTrack.preferredTransform
            do {
                try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration), of: videoTrack, at: lastTime)
                if let audioTrack = video.tracks(withMediaType: .audio)[safe: 0] {
                    try audioCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: video.duration), of: audioTrack, at: lastTime)
                }
                lastTime = CMTimeAdd(lastTime, video.duration)

                // Obtain video dimensions and update max size if necessary
                let videoSize = videoTrack.naturalSize.applying(videoTrack.preferredTransform)
                let videoRect = CGRect(x: 0, y: 0, width: abs(videoSize.width), height: abs(videoSize.height))
                if videoRect.width > maxVideoSize.width { maxVideoSize.width = videoRect.width }
                if videoRect.height > maxVideoSize.height { maxVideoSize.height = videoRect.height }

                // Create and configure the text layer for this segment
                let textLayer = CATextLayer()
                textLayer.string = "TESTING"
                textLayer.foregroundColor = UIColor.white.cgColor
                textLayer.backgroundColor = UIColor.clear.cgColor
                textLayer.fontSize = 100
                textLayer.shadowOpacity = 0.5
                textLayer.alignmentMode = .center
                textLayer.contentsScale = UIScreen.main.scale // Ensures text is sharp
                textLayer.isWrapped = true // Allows text wrapping if needed

                // Calculate frame for centrally aligned text
                let textHeight: CGFloat = 120 // Adjust as needed
                let textWidth: CGFloat = videoRect.width // Padding from edges
                let xPos = (videoRect.width - textWidth) / 2
                let yPos = (videoRect.height - textHeight) / 2
                textLayer.frame = CGRect(x: xPos, y: yPos, width: textWidth, height: textHeight)
                print(textLayer.frame)

                // Create a parent layer for video and text
                let parentLayer = CALayer()
                let videoLayer = CALayer()
                parentLayer.frame = videoRect
                videoLayer.frame = videoRect
                textLayer.zPosition = 1 // Ensuring text layer is on top
                parentLayer.addSublayer(videoLayer)
                parentLayer.addSublayer(textLayer)
                parentLayers.append(parentLayer) // Add to array

                // Add parent layer to video composition
                let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
                videoCompositionInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: video.duration)
                let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
                videoCompositionInstruction.layerInstructions = [layerInstruction]
                mainComposition.instructions.append(videoCompositionInstruction)

                count += 1
            } catch {
                print("Failed to insert track")
                return
            }
        }
    }

    let mainParentLayer = CALayer()
    mainParentLayer.frame = CGRect(x: 0, y: 0, width: maxVideoSize.width, height: maxVideoSize.height)
    for layer in parentLayers {
        mainParentLayer.addSublayer(layer)
    }

    // Set the renderSize and frameDuration of the mainComposition
    mainComposition.renderSize = maxVideoSize
    mainComposition.frameDuration = CMTime(value: 1, timescale: 30) // Assuming 30 fps
    mainComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: mainParentLayer, in: mainParentLayer)

    let outputUrl = NSURL.fileURL(withPath: NSTemporaryDirectory() + "mergedVid" + ".mp4")
    guard let exporter = AVAssetExportSession(asset: videoComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.videoComposition = mainComposition
    exporter.outputURL = outputUrl
    exporter.outputFileType = .mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronously {
        DispatchQueue.main.async {
            if let outputUrl = exporter.outputURL, exporter.status == .completed {
                completion(outputUrl, exporter)
            } else if let error = exporter.error {
                print("Export failed: \(error.localizedDescription)")
            }
        }
    }
    play(video: exporter.asset)
}
```
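As an aside on the size calculation in the code above: applying preferredTransform to naturalSize can produce negative components (for example with rotated portrait captures), which is why the code takes abs of each. A standalone sketch of that calculation follows; the 90-degree rotation here is a hypothetical stand-in for a real track's preferredTransform:

```swift
import Foundation

// A portrait iPhone recording is typically stored landscape
// (naturalSize 1920x1080) with a 90-degree preferredTransform.
let naturalSize = CGSize(width: 1920, height: 1080)
let transform = CGAffineTransform(rotationAngle: .pi / 2) // stand-in for videoTrack.preferredTransform

// Applying the transform can yield negative width/height...
let transformed = naturalSize.applying(transform)

// ...so take absolute values to get the on-screen render size.
let displaySize = CGSize(width: abs(transformed.width), height: abs(transformed.height))
print(Int(displaySize.width.rounded()), Int(displaySize.height.rounded())) // 1080 1920
```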


ios swift avfoundation avmutablecomposition avmutablevideocomposition
1 Answer

0 votes

Your AVVideoCompositionCoreAnimationTool is not configured correctly to composite these layers over the video.

You should create a main video layer (mainVideoLayer) and add it to the main parent layer (mainParentLayer), then add each video's text layer to mainParentLayer. This establishes the proper hierarchy, in which the text overlays sit on top of the video content.

Then configure AVVideoCompositionCoreAnimationTool with mainVideoLayer as the layer the video is rendered into and mainParentLayer as the animation layer.

Also make sure that each layer in mainParentLayer correctly spans the entire duration of its corresponding video segment; this matters for seamless playback and for the text to be overlaid correctly.

The relevant excerpt of your merge function (the layer and AVMutableVideoCompositionInstruction setup) would then be:
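The code excerpt itself did not survive in this copy of the answer. Below is a sketch reconstructing it from the description above; mainVideoLayer follows the answer's naming, while the assumption that the text layers are collected into a textLayers array inside the per-video loop (instead of per-video parent layers) is mine:

```swift
// Sketch only: reconstructs the layer setup the answer describes.
let mainParentLayer = CALayer()
mainParentLayer.frame = CGRect(origin: .zero, size: maxVideoSize)

// One dedicated layer that AVFoundation renders the video frames into.
let mainVideoLayer = CALayer()
mainVideoLayer.frame = mainParentLayer.bounds
mainParentLayer.addSublayer(mainVideoLayer)

// Text layers go on top of the video layer, inside the parent layer
// (textLayers is assumed to be collected in the per-video loop).
for textLayer in textLayers {
    mainParentLayer.addSublayer(textLayer)
}

mainComposition.renderSize = maxVideoSize
mainComposition.frameDuration = CMTime(value: 1, timescale: 30)

// Key change: the video layer and the animation (parent) layer must be
// different layers; passing mainParentLayer for both leaves no separate
// surface for the video, so the overlay never appears.
mainComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: mainVideoLayer,
    in: mainParentLayer
)
```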
