How do I set individual transforms for videos when merging them in AVFoundation?


I want to merge multiple videos (all from different sources) together in Swift using AVFoundation. The resulting video should be in portrait orientation.

The function I wrote does merge the videos into a single video. However, videos shot on a phone (e.g. an iPhone) come out in landscape, while the remaining videos come out in portrait. The landscape videos are then stretched vertically to fill the portrait aspect ratio. It seems the iPhone stores video in landscape (even when recorded in portrait) and the system then uses metadata to display it as portrait.
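
A quick way to confirm this is to read the track's preferredTransform and apply it to its naturalSize to get the size the clip is actually displayed at. The snippet below is only an illustrative sketch (the file URL is a placeholder and the synchronous tracks(withMediaType:) API is used for brevity):

import AVFoundation

// Sketch: compare a clip's stored size with its displayed size.
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/clip.mov")) // placeholder path
if let track = asset.tracks(withMediaType: .video).first {
  let natural = track.naturalSize              // storage size, e.g. 1920x1080
  let t = track.preferredTransform             // rotation metadata written by the camera
  let displayRect = CGRect(origin: .zero, size: natural).applying(t)
  let displaySize = CGSize(width: abs(displayRect.width), height: abs(displayRect.height))
  print("stored:", natural, "displayed:", displaySize)
  // A portrait iPhone recording typically stores 1920x1080 and displays 1080x1920;
  // the portrait appearance comes entirely from preferredTransform.
}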

To work around this, I tried detecting whether a video is landscape (or otherwise rotated) and then manually transforming it to portrait. When I do that, however, the transform seems to apply to the entire track, so the whole composition renders in landscape, with some videos shown landscape and others portrait. I don't know how to apply a transform to just a single video. I tried using multiple tracks, but then only one video is shown and the remaining tracks are ignored. Here is an example of the exported video: it should render as 9:16, but after the transform it renders as 16:9, and note that the second clip is distorted even though it was originally recorded in portrait.

Here is my code:

private static func mergeVideos(
    videoPaths: [URL],
    outputURL: URL,
    handler: @escaping (_ path: URL)-> Void
  ) {
    let videoComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    
    guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    
    for path in videoPaths {
      let assetVideo = AVAsset(url: path)
      
      getTracks(assetVideo, .video) { videoTracks in
        // Add video track
        do {
          try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: videoTracks[0], at: lastTime)
          
          // Apply the original transform
          if let assetVideoTrack = assetVideo.tracks(withMediaType: AVMediaType.video).last {
            let t = assetVideoTrack.preferredTransform
            let size = assetVideoTrack.naturalSize
            
            let videoAssetOrientation: CGImagePropertyOrientation

            if size.width == t.tx && size.height == t.ty {
              print("down")
              
              videoAssetOrientation = .down
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi) // 180 degrees
            } else if t.tx == 0 && t.ty == 0 {
              print("up")
              
              videoCompositionTrack.preferredTransform = assetVideoTrack.preferredTransform
              videoAssetOrientation = .up
            } else if t.tx == 0 && t.ty == size.width {
              print("left")
              
              videoAssetOrientation = .left
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2) // 90 degrees to the right

            } else {
              print("right")
              
              videoAssetOrientation = .right
              videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: -.pi / 2) // 90 degrees to the left
            }
          }
          
        } catch {
          print("Failed to insert video track")
          return
        }
        
        self.getTracks(assetVideo, .audio) { audioTracks in
          // Add audio track only if it exists
          if !audioTracks.isEmpty {
            do {
              try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: audioTracks[0], at: lastTime)
            } catch {
              print("Failed to insert audio track")
              return
            }
          }
          
          // Update time
          lastTime = CMTimeAdd(lastTime, assetVideo.duration)
        }
      }
    }
        
    guard let exporter = AVAssetExportSession(asset: videoComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = AVFileType.mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronously(completionHandler: {
      switch exporter.status {
      case .failed:
        print("Export failed \(exporter.error!)")
      case .completed:
        print("completed export")
        handler(outputURL)
      default:
        break
      }
    })
  }

Does anyone know what I'm missing here? Any help is greatly appreciated.

ios swift video avfoundation
1 Answer

A transform set on videoCompositionTrack affects the entire track. Instead, you can use an AVVideoComposition, which uses AVVideoCompositionInstruction objects to apply video processing per time range, so each clip can get its own transform.

Here is the code with the unimportant parts omitted, and with videoComposition renamed to mainComposition to avoid confusion:

private static func mergeVideos(
    videoPaths: [URL],
    outputURL: URL,
    handler: @escaping (_ path: URL) -> Void
) {
    let mainComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    
    guard let videoCompositionTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
    
    for path in videoPaths {
      let assetVideo = AVAsset(url: path)
      
      getTracks(assetVideo, .video) { videoTracks in
        // Add video track
        do {
          try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: videoTracks[0], at: lastTime)
            
          // Apply the original transform
          if let assetVideoTrack = assetVideo.tracks(withMediaType: AVMediaType.video).last {
              let t = assetVideoTrack.preferredTransform
              layerInstruction.setTransform(t, at: lastTime) // apply the transform to this clip's time range
          }
          
        } catch {
          print("Failed to insert video track")
          return
        }
        
        // deal with audio part ...
      }
    }
    
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1080, height: 1920) // required: the portrait output size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // required: e.g. 30 fps
    
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, end: lastTime)
    instruction.layerInstructions = [layerInstruction] // attach the per-clip transforms
    videoComposition.instructions = [instruction]
    
    guard let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    
    // assign videoComposition to exporter
    exporter.videoComposition = videoComposition
    
    // other export part ...
}
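
Note that setTransform(_:at:) with the raw preferredTransform only corrects the rotation; a landscape clip can still overflow or sit outside a portrait render size. If you also want each clip aspect-fitted and centred (rather than stretched), you can build a combined transform per clip. The helper below is only a sketch of one way to do that; fittingTransform(for:renderSize:) is a name introduced here, and the 1080x1920 render size is an assumption matching the portrait output above:

import AVFoundation

// Sketch: rotate the clip with its preferredTransform, then scale and centre it
// so it aspect-fits inside the composition's render size.
func fittingTransform(for track: AVAssetTrack, renderSize: CGSize) -> CGAffineTransform {
  let t = track.preferredTransform
  // Size and position of the clip after the camera's rotation is applied.
  let displayRect = CGRect(origin: .zero, size: track.naturalSize).applying(t)
  let displaySize = CGSize(width: abs(displayRect.width), height: abs(displayRect.height))

  // Move the rotated clip back to the origin (the rotation can leave it at negative coordinates).
  let toOrigin = CGAffineTransform(translationX: -displayRect.minX, y: -displayRect.minY)

  // Aspect-fit scale and centring offsets.
  let scale = min(renderSize.width / displaySize.width, renderSize.height / displaySize.height)
  let tx = (renderSize.width - displaySize.width * scale) / 2
  let ty = (renderSize.height - displaySize.height * scale) / 2

  return t
    .concatenating(toOrigin)
    .concatenating(CGAffineTransform(scaleX: scale, y: scale))
    .concatenating(CGAffineTransform(translationX: tx, y: ty))
}

With that, the loop would call layerInstruction.setTransform(fittingTransform(for: assetVideoTrack, renderSize: CGSize(width: 1080, height: 1920)), at: lastTime) instead of passing the raw preferredTransform.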

P.S. It would be better if you also included your getTracks(_:, _:) method so the code is complete.
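
For reference, a plausible shape for that helper, assuming it simply loads the tracks asynchronously and calls back with the result (loadTracks(withMediaType:) requires iOS 15+; the original implementation may differ):

import AVFoundation

// Sketch of a getTracks(_:_:) helper matching how it is called above.
private static func getTracks(
  _ asset: AVAsset,
  _ mediaType: AVMediaType,
  completion: @escaping ([AVAssetTrack]) -> Void
) {
  asset.loadTracks(withMediaType: mediaType) { tracks, error in
    if let error = error {
      print("Failed to load \(mediaType.rawValue) tracks: \(error)")
      completion([])
      return
    }
    completion(tracks ?? [])
  }
}

Keep in mind that the completion handler may fire asynchronously, so anything that depends on all clips having been inserted (such as lastTime and the export itself) needs to wait for those callbacks to finish.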
