Exporting a collage video with MetalKit


How do I export a collage video from source videos with different resolutions? I am trying to achieve the layout shown in the image below, and I am working from Apple's AVCustomEdit sample. So far I have created an AVMutableVideoComposition, passed all of the video track IDs to my customVideoCompositorClass, obtained each video's CVPixelBuffer, converted each buffer to an MTLTexture, and rendered all of the textures. The problem is that my output (destinationTexture) is square while the source videos are portrait or landscape, so every video ends up squeezed. How can I scale, position and rotate each video, and apply a mask shape to it? And how should I apply CIFilters: do I convert each CVPixelBuffer to a CIImage and then convert the CIImage back to a CVPixelBuffer?

[Image: video collage issue]
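For the CIFilter part, I imagine the round trip would look roughly like the sketch below, but I am not sure whether converting every frame this way is the right approach (the filter name and the Metal-backed CIContext here are just placeholders):

import CoreImage
import CoreVideo
import Metal

// Sketch of the CVPixelBuffer -> CIImage -> CVPixelBuffer round trip.
// The CIContext should be created once (Metal-backed) and reused for every frame.
func applyCIFilter(to sourceBuffer: CVPixelBuffer,
                   writingInto destinationBuffer: CVPixelBuffer,
                   ciContext: CIContext) {
    let inputImage = CIImage(cvPixelBuffer: sourceBuffer)

    // "CIPhotoEffectNoir" is only an example filter name.
    guard let filter = CIFilter(name: "CIPhotoEffectNoir") else { return }
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    guard let outputImage = filter.outputImage else { return }

    // Render the filtered image back into a pixel buffer,
    // which can then be turned into an MTLTexture as before.
    ciContext.render(outputImage, to: destinationBuffer)
}

// let ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)

Here is my current render method from the custom compositor: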

override func renderPixelBuffer(backgroundTexture: MTLTexture,
                                firstPixelBuffer: CVPixelBuffer,
                                secondPixelBuffer: CVPixelBuffer,
                                thirdPixelBuffer: CVPixelBuffer,
                                fourthPixelBuffer: CVPixelBuffer,
                                destinationPixelBuffer: CVPixelBuffer) {

    // Create a MTLTexture from the CVPixelBuffer.
    guard let firstTexture = buildTextureForPixelBuffer(firstPixelBuffer) else { return }
    guard let secondTexture = buildTextureForPixelBuffer(secondPixelBuffer) else { return }
    guard let thirdTexture = buildTextureForPixelBuffer(thirdPixelBuffer) else { return }
    guard let fourthTexture = buildTextureForPixelBuffer(fourthPixelBuffer) else { return }
    guard let destinationTexture = buildTextureForPixelBuffer(destinationPixelBuffer) else { return }

    /*
     We must maintain a reference to the pixel buffer until the Metal rendering is complete. This is because the
     'buildTextureForPixelBuffer' function above uses CVMetalTextureCacheCreateTextureFromImage to create a
     Metal texture (CVMetalTexture) from the IOSurface that backs the CVPixelBuffer, but
     CVMetalTextureCacheCreateTextureFromImage doesn't increment the use count of the IOSurface; only the
     CVPixelBuffer, and the CVMTLTexture own this IOSurface. Therefore we must maintain a reference to either
     the pixel buffer or Metal texture until the Metal rendering is done. The MTLCommandBuffer completion
     handler below is then used to release these references.
     */

    pixelBuffers = RenderPixelBuffers(firstBuffer: firstPixelBuffer,
                                      secondBuffer: secondPixelBuffer,
                                      thirdBuffer: thirdPixelBuffer,
                                      fourthBuffer: fourthPixelBuffer,
                                      destinationBuffer: destinationPixelBuffer)

    // Create a new command buffer for each renderpass to the current drawable.
    let commandBuffer = commandQueue.makeCommandBuffer()!
    commandBuffer.label = "MyCommand"

    /*
     Obtain a drawable texture for this render pass and set up the renderpass
     descriptor for the command encoder to render into.
     */
    let renderPassDescriptor = setupRenderPassDescriptorForTexture(destinationTexture)

    // Create a render command encoder so we can render into something.
    let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
    renderEncoder.label = "MyRenderEncoder"

    guard let renderPipelineState = renderPipelineState else { return }

    modelConstants.modelViewMatrix = matrix_identity_float4x4

    // Render background texture.
    renderTexture(renderEncoder, texture: backgroundTexture, pipelineState: renderPipelineState)

    var translationMatrix = matrix_float4x4(translation: simd_float3(-0.5, 0.5, 0))
    // var rotationMatrix = matrix_float4x4(rotationZ: radians(fromDegrees: -90))
    var scaleMatrix = matrix_float4x4(scaling: 0.25)
    var modelMatrix = translationMatrix * scaleMatrix
    modelConstants.modelViewMatrix = modelMatrix

    // Render first texture.
    renderTexture(renderEncoder, texture: firstTexture, pipelineState: renderPipelineState)

    // translationMatrix = matrix_float4x4(translation: simd_float3(0.5, -0.5, 0))
    // rotationMatrix = matrix_float4x4(rotationZ: radians(fromDegrees: -45))
    // scaleMatrix = matrix_float4x4(scaling: 0.5)
    // modelMatrix = translationMatrix * scaleMatrix * rotationMatrix
    // modelConstants.modelViewMatrix = modelMatrix

    // Render second texture.
    // renderTexture(renderEncoder, texture: secondTexture, pipelineState: renderPipelineState)

    // Render third texture.
    // renderTexture(renderEncoder, texture: thirdTexture, pipelineState: renderPipelineState)

    // Render fourth texture.
    // renderTexture(renderEncoder, texture: fourthTexture, pipelineState: renderPipelineState)

    // We're done encoding commands.
    renderEncoder.endEncoding()

    // Use the command buffer completion block to release the reference to the pixel buffers.
    commandBuffer.addCompletedHandler({ _ in
        self.pixelBuffers = nil // Release the reference to the pixel buffers.
    })

    // Finalize rendering here & push the command buffer to the GPU.
    commandBuffer.commit()
}
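I suspect I need an aspect-fit transform for each source, something along the lines of the sketch below, but I am not sure this is the right approach. (aspectFitMatrix is a hypothetical helper, and it assumes a non-uniform matrix_float4x4(scaling: simd_float3) initializer in addition to the uniform one used above.)

import CoreVideo
import Metal
import simd

// Hypothetical helper: builds a model matrix that fits a source video into one
// quadrant of the square destination without stretching it.
func aspectFitMatrix(for pixelBuffer: CVPixelBuffer,
                     destinationTexture: MTLTexture,
                     quadrantScale: Float,
                     translation: simd_float3) -> matrix_float4x4 {
    let sourceAspect = Float(CVPixelBufferGetWidth(pixelBuffer)) / Float(CVPixelBufferGetHeight(pixelBuffer))
    let destAspect = Float(destinationTexture.width) / Float(destinationTexture.height)

    // Shrink one axis so the quad keeps the source's aspect ratio.
    var scaleX = quadrantScale
    var scaleY = quadrantScale
    if sourceAspect > destAspect {
        scaleY *= destAspect / sourceAspect   // landscape source: reduce height
    } else {
        scaleX *= sourceAspect / destAspect   // portrait source: reduce width
    }

    let translationMatrix = matrix_float4x4(translation: translation)
    let scaleMatrix = matrix_float4x4(scaling: simd_float3(scaleX, scaleY, 1))
    return translationMatrix * scaleMatrix
}

// Usage inside renderPixelBuffer, e.g. for the first video:
// modelConstants.modelViewMatrix = aspectFitMatrix(for: firstPixelBuffer,
//                                                  destinationTexture: destinationTexture,
//                                                  quadrantScale: 0.25,
//                                                  translation: simd_float3(-0.5, 0.5, 0))
// renderTexture(renderEncoder, texture: firstTexture, pipelineState: renderPipelineState)

A rotation could presumably be multiplied in between the translation and scale matrices, but I still do not know how to mask each quad into an arbitrary shape.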
ios swift avfoundation metal metalkit
1 Answer

I would suggest using a library called MetalPetal. It is a Metal-based image processing framework. You have to convert each CVPixelBuffer into MetalPetal's image type, MTIImage. Then you can do whatever you need with the image: apply its built-in filters, use a CIFilter or a custom filter, and transform, rotate and crop each frame so the collage frames line up exactly. After that you convert the MTIImage back into a CVPixelBuffer. You could also use CIImage here, but I suspect it would be slower. Also, you may be getting boxed (squeezed) frames because of your render size, so double-check the render size.
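A minimal sketch of that round trip could look like this; MTISaturationFilter is only a stand-in for whatever filters or transforms you end up applying, and the MTIContext should be created once and reused:

import CoreVideo
import Metal
import MetalPetal

// Rough sketch of the CVPixelBuffer -> MTIImage -> CVPixelBuffer round trip.
func render(_ sourceBuffer: CVPixelBuffer,
            into destinationBuffer: CVPixelBuffer,
            using context: MTIContext) throws {
    // Wrap the pixel buffer as an MTIImage (no pixel copy happens here).
    let inputImage = MTIImage(cvPixelBuffer: sourceBuffer, alphaType: .alphaIsOne)

    // Apply a built-in filter; a CIFilter or a custom filter could be used instead.
    let filter = MTISaturationFilter()
    filter.inputImage = inputImage
    filter.saturation = 1.2

    guard let outputImage = filter.outputImage else { return }

    // Render the result back into the destination pixel buffer.
    try context.render(outputImage, to: destinationBuffer)
}

// Usage: create the context once and reuse it for every frame.
// let context = try MTIContext(device: MTLCreateSystemDefaultDevice()!)
// try render(firstPixelBuffer, into: destinationPixelBuffer, using: context)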
