Convert ARFrame#capturedImage to view size


When processing the raw camera image in ARKit with an ARSessionDelegate:

func session(_ session: ARSession, didUpdate frame: ARFrame) {

    guard let currentFrame = session.currentFrame else { return }
    let capturedImage = currentFrame.capturedImage

    debugPrint("Display size", UIScreen.main.bounds.size)
    debugPrint("Camera frame resolution", CVPixelBufferGetWidth(capturedImage), CVPixelBufferGetHeight(capturedImage))

    // ...

}

As the logs show, the camera image data doesn't match the screen size; for example, on an iPhone X I get the following (see the quick ratio check after the list):

  • Display size: 375x812 pt
  • Camera resolution: 1920x1440 px
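Comparing the two aspect ratios makes the mismatch concrete; here is a minimal sketch using the iPhone X numbers above (the constant names are made up for illustration):

import CoreGraphics

// iPhone X values from the logs above
let displaySize = CGSize(width: 375, height: 812)   // points, portrait
let cameraSize = CGSize(width: 1920, height: 1440)  // pixels, landscape sensor

let viewAspect = displaySize.width / displaySize.height   // 375/812 ≈ 0.46
let cameraAspect = cameraSize.height / cameraSize.width   // 1440/1920 = 0.75 once rotated to portrait

// 0.46 != 0.75, so scaling alone can't map the camera image onto the view;
// the image also has to be cropped (ARSCNView draws it aspect-filled).
debugPrint(viewAspect, cameraAspect)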

Now there is the displayTransform(for:viewportSize:) API, which can convert from camera coordinates to view coordinates. When using the API like this:

let ciimage = CIImage(cvImageBuffer: capturedImage)
let transform = currentFrame.displayTransform(for: .portrait, viewportSize: UIScreen.main.bounds.size)
let transformedImage = ciimage.transformed(by: transform)
debugPrint("Transformed size", transformedImage.extent.size)

I get a size of 2340x1920, which doesn't seem right: the result should have an aspect ratio of 375:812 (~0.46). What am I missing here, and what is the correct way to use this API to transform the camera image into the one displayed by ARSCNView?
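Reading Apple's documentation again, displayTransform(for:viewportSize:) operates on normalized image coordinates, i.e. the unit square, so applying it to a CIImage whose extent is measured in pixels presumably mixes two coordinate spaces. A quick sanity check of that reading (continuing the snippet above; the full pipeline is in the answer below):

// The transform maps normalized image coordinates (0...1 on both axes)
// to normalized viewport coordinates, so it is meant for the unit square,
// not for a 1920x1440 pixel extent:
let unitSquare = CGRect(x: 0, y: 0, width: 1, height: 1)
debugPrint("Unit square after displayTransform", unitSquare.applying(transform))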

(Sample project: ARKitCameraImage)

ios arkit
1 Answer
This is quite complicated, because displayTransform(for:viewportSize:) requires normalized image coordinates; it seems you only have to flip the coordinates in portrait mode; and you need not only to transform the image but also to crop it. The following code works for me. Suggestions on how to improve it would be much appreciated.

guard let frame = session.currentFrame else { return }
let imageBuffer = frame.capturedImage
let imageSize = CGSize(width: CVPixelBufferGetWidth(imageBuffer),
                       height: CVPixelBufferGetHeight(imageBuffer))
let viewPort = sceneView.bounds
let viewPortSize = sceneView.bounds.size

let interfaceOrientation: UIInterfaceOrientation
if #available(iOS 13.0, *) {
    interfaceOrientation = self.sceneView.window!.windowScene!.interfaceOrientation
} else {
    interfaceOrientation = UIApplication.shared.statusBarOrientation
}

let image = CIImage(cvImageBuffer: imageBuffer)

// The camera image doesn't match the view rotation and aspect ratio.
// Transform the image:

// 1) Convert to "normalized image coordinates"
let normalizeTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width,
                                           y: 1.0 / imageSize.height)

// 2) Flip the X and Y axes (for some mysterious reason this is only necessary in portrait mode)
let flipTransform = interfaceOrientation.isPortrait
    ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1)
    : .identity

// 3) Apply the transformation provided by ARFrame
// This transformation converts:
// - from normalized image coordinates, which range from (0,0) in the
//   upper-left corner of the image to (1,1) in the lower-right corner,
// - to view coordinates ("a coordinate space appropriate for rendering
//   the camera image onscreen").
// See also: https://developer.apple.com/documentation/arkit/arframe/2923543-displaytransform
let displayTransform = frame.displayTransform(for: interfaceOrientation,
                                              viewportSize: viewPortSize)

// 4) Convert to view size
let toViewPortTransform = CGAffineTransform(scaleX: viewPortSize.width,
                                            y: viewPortSize.height)

// Transform the image and crop it to the viewport
let transformedImage = image
    .transformed(by: normalizeTransform
        .concatenating(flipTransform)
        .concatenating(displayTransform)
        .concatenating(toViewPortTransform))
    .cropped(to: viewPort)
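If you then need the result as a UIImage (for example to display it or feed it to Vision), one plausible follow-up, assuming you create the CIContext once and reuse it:

let ciContext = CIContext() // expensive: create once and reuse in real code
if let cgImage = ciContext.createCGImage(transformedImage, from: transformedImage.extent) {
    let uiImage = UIImage(cgImage: cgImage)
    // e.g. show it in a UIImageView, or hand it to Vision / Core ML
}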
