How do I position a CALayer within a video?


I have a UIView (size W: 375, H: 667) in which an image can be placed anywhere. This image is later overlaid onto a video and saved. My problem is that when I watch the exported video, the image does not appear at the position I selected in my UIView, because the video's size is 720 x 1280. How can I map the image's position in my UIView onto the video (720 x 1280)? This is the code I'm using:

private func watermark(video videoAsset: AVAsset,
                       modelView: MyViewModel,
                       watermarkText text: String!,
                       imageName name: String!,
                       saveToLibrary flag: Bool,
                       watermarkPosition position: QUWatermarkPosition,
                       completion: ((_ status: AVAssetExportSession.Status?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {

         DispatchQueue.global(qos: DispatchQoS.QoSClass.default).async {

            let mixComposition = AVMutableComposition()


            let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
            let clipVideoTrack:AVAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
            do {
                try compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: clipVideoTrack, at: CMTime.zero)
            }
            catch {
                print(error.localizedDescription)
            }


            let videoSize = self.resolutionSizeForLocalVideo(asset: clipVideoTrack)
              print("DIMENSIONE DEL VIDEO W: \(videoSize.width) H: \(videoSize.height)")

            let parentLayer = CALayer()
            let videoLayer = CALayer()

            parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
            videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)


            parentLayer.addSublayer(videoLayer)

             // The watermark image layer
            let layerTest = CALayer()

            layerTest.frame = modelView.frame
            layerTest.contents = modelView.image.cgImage

            print("A: \(modelView.frame.origin.y)    -     \(modelView.frame.origin.x)")
            print("B: \(layerTest.frame.origin.y)     -     \(layerTest.frame.origin.x)")
            parentLayer.addSublayer(layerTest)

           print("PARENT: \(parentLayer.frame.origin.y)    -     \(parentLayer.frame.origin.x)")
            //------------------------

            let videoComp = AVMutableVideoComposition()
            videoComp.renderSize = videoSize
            videoComp.frameDuration = CMTimeMake(value: 1, timescale: 30)
            videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

            let instruction = AVMutableVideoCompositionInstruction()

            instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: mixComposition.duration)

              let layerInstruction = self.videoCompositionInstructionForTrack(track: compositionVideoTrack!, asset: videoAsset)
            layerInstruction.setTransform((clipVideoTrack.preferredTransform), at: CMTime.zero)

            instruction.layerInstructions = [layerInstruction]
            videoComp.instructions = [instruction]

            let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let dateFormatter = DateFormatter()
            dateFormatter.dateStyle = .long
            dateFormatter.timeStyle = .short
            let date = dateFormatter.string(from: Date())
            let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("watermarkVideo-\(date).mp4")

            let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
            exporter?.outputURL = url
            exporter?.outputFileType = AVFileType.mp4
            exporter?.shouldOptimizeForNetworkUse = true
            exporter?.videoComposition = videoComp

            exporter?.exportAsynchronously() {
                DispatchQueue.main.async {

                    if exporter?.status == AVAssetExportSession.Status.completed {
                        let outputURL = exporter?.outputURL
                        if flag {
                            // Save to library

                            if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL!.path) {
                                PHPhotoLibrary.shared().performChanges({
                                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
                                }) { saved, error in
                                    if saved {
                                        completion!(AVAssetExportSession.Status.completed, exporter, outputURL)
                                    }
                                }
                            }

                        } else {
                            completion!(AVAssetExportSession.Status.completed, exporter, outputURL)
                        }

                    } else {
                        // Error
                        completion!(exporter?.status, exporter, nil)
                    }
                }
            }
        }

    }


    private func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {


        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        let assetTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
        let scale : CGAffineTransform = CGAffineTransform(scaleX: 1, y:1)
        instruction.setTransform(assetTrack.preferredTransform.concatenating(scale), at: CMTime.zero)
        return instruction
    }
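
    The helper resolutionSizeForLocalVideo isn't shown above; a typical implementation (assumed here) derives the display size from the track's naturalSize and preferredTransform:

    private func resolutionSizeForLocalVideo(asset track: AVAssetTrack) -> CGSize {
        // Applying preferredTransform makes portrait videos report their
        // display size (e.g. 720 x 1280) instead of the encoded size.
        let size = track.naturalSize.applying(track.preferredTransform)
        return CGSize(width: abs(size.width), height: abs(size.height))
    }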

This is what I'm trying to achieve: (screenshot omitted)

ios swift avfoundation
1 Answer

The answer to this question may help. I ran into a similar problem when trying to place user-generated text over a video. This is what worked for me:

First, I added a helper method that converts a CGPoint from one rect to another:

func convertPoint(point: CGPoint, fromRect: CGRect, toRect: CGRect) -> CGPoint {
    return CGPoint(x: (toRect.size.width / fromRect.size.width) * point.x,
                   y: (toRect.size.height / fromRect.size.height) * point.y)
}
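
For the sizes in the question, a point at (100, 200) in the 375 x 667 view would map to (100 × 720/375, 200 × 1280/667) ≈ (192, 384) in the 720 x 1280 render space.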

I positioned my text view (in your case, an image view) using its center point. Here's how the adjusted center point is computed with the helper method:

let adjustedCenter = convertPoint(point: imageView.center, fromRect: view.frame, toRect: CGRect(x: 0, y: 0, width: 720.0, height: 1280.0))

After that I had to do some extra positioning because the CALayer's coordinate system is flipped, so this is what the final point looks like:

let finalCenter = CGPoint(x: adjustedCenter.x, y: (1280.0 - adjustedCenter.y) - (imageView.bounds.height / 2.0))

Then you set the CALayer's position property to that point:

layerTest.position = finalCenter
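
To tie that back to the watermark(video:) code in the question, the conversion would replace the direct layerTest.frame = modelView.frame assignment. A rough sketch, assuming the image was placed in the 375 x 667 view mentioned in the question and that videoSize is 720 x 1280:

// The UIView the image was placed in (375 x 667, per the question) and
// the video render space reported by resolutionSizeForLocalVideo.
let sourceRect = CGRect(x: 0, y: 0, width: 375, height: 667)
let renderRect = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)

// Scale the image's size into video coordinates as well, so it isn't
// drawn at its on-screen point size inside the 720 x 1280 frame
// (this step isn't in the answer above, but is usually needed).
layerTest.frame = CGRect(x: 0, y: 0,
                         width: modelView.frame.width * (renderRect.width / sourceRect.width),
                         height: modelView.frame.height * (renderRect.height / sourceRect.height))
layerTest.contents = modelView.image.cgImage

// Convert the view-space center and flip Y, as described above.
let viewCenter = CGPoint(x: modelView.frame.midX, y: modelView.frame.midY)
let adjustedCenter = convertPoint(point: viewCenter, fromRect: sourceRect, toRect: renderRect)
layerTest.position = CGPoint(x: adjustedCenter.x,
                             y: (renderRect.height - adjustedCenter.y) - (modelView.frame.height / 2.0))
parentLayer.addSublayer(layerTest)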

Hope that helps!
