Combining Core ML object detection with ARKit 2D image detection


The app detects specific 2D images (using ARKit) and also has an mlmodel that detects certain pieces of furniture. The mlmodel is an Object Detection model; it is trained and works. Depending on what is detected, I need to add different 3D objects to the scene.

I created an AR session with ARWorldTrackingConfiguration. I can detect the 2D images and add 3D objects in renderer(_:didAdd:for:), and that works fine:

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    configuration.detectionImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
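For completeness, a minimal sketch of what that renderer(_:didAdd:for:) callback could look like; the makeOverlayNode() helper is hypothetical and stands in for whatever builds the 3D content:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // React only to detected reference images, not to other anchor types.
    guard anchor is ARImageAnchor else { return }

    // makeOverlayNode() is a hypothetical helper that builds the 3D content
    // to attach to the node ARKit created for the detected image.
    let overlayNode = makeOverlayNode()
    node.addChildNode(overlayNode)
}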

I also set up the mlmodel:

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.session.delegate = self
    setupML()
}

internal func setupML() {
    guard let modelPath = Bundle.main.url(forResource: "furnituresDetector", withExtension: "mlmodelc") else {
        fatalError("Missing model")
    }

    do {
        let coreMLModel = try VNCoreMLModel(for: MLModel(contentsOf: modelPath))
        let request = VNCoreMLRequest(model: coreMLModel) { [weak self] (request, error) in
            DispatchQueue.main.async {
                if let results = request.results {
                    print(results.count)
                }
            }
        }
        self.requests = [request]
    } catch {
        print("Core ML model error: \(error)")
    }
}

For now I just print the number of results, to see whether the ML model detects anything.
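Since the model is an object detector, the results should arrive as VNRecognizedObjectObservation values, so the completion handler could do more than count them. A sketch, with an arbitrary 0.8 confidence cutoff:

let request = VNCoreMLRequest(model: coreMLModel) { (request, error) in
    DispatchQueue.main.async {
        // Object-detection models vend VNRecognizedObjectObservation results.
        guard let observations = request.results as? [VNRecognizedObjectObservation] else {
            return
        }
        for observation in observations where observation.confidence > 0.8 {
            // labels are ranked by confidence; boundingBox is in normalized
            // image coordinates with the origin at the bottom-left.
            if let best = observation.labels.first {
                print("Detected \(best.identifier) (\(observation.confidence)) at \(observation.boundingBox)")
            }
        }
    }
}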

Up to this point everything works: the app runs and the camera feed is smooth. Note that I don't start a new camera session; I reuse the one started by ARSCNView, an approach I found in "Combining CoreML and ARKit".

So my approach is to issue the Core ML requests from session(_:didUpdate:), so that I constantly know whether the model detects something in the camera feed:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue(label: "CoreML_request").async {
        guard let pixelBuffer = session.currentFrame?.capturedImage else {
            return
        }

        let exifOrientation = self.exifOrientationFromDeviceOrientation()

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: [:])
        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }
}
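The exifOrientationFromDeviceOrientation() helper isn't shown in the question; a version along the lines of Apple's Vision sample code (assuming the back camera) would be:

import UIKit
import ImageIO

// Maps the current device orientation to the EXIF orientation Vision
// expects for the back camera's pixel buffer.
func exifOrientationFromDeviceOrientation() -> CGImagePropertyOrientation {
    switch UIDevice.current.orientation {
    case .portraitUpsideDown: return .left
    case .landscapeLeft:      return .upMirrored
    case .landscapeRight:     return .down
    case .portrait:           return .up
    default:                  return .up
    }
}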

If I run this, the app works, but the camera feed becomes very slow; if I remove the code in session(_:didUpdate:), the camera is smooth again. So the problem is here. My guess is that this isn't the right place to issue the request, because this method is called for every new camera frame, but I don't know where else to do it or what to change. Any ideas?

I'll update the question if I find a solution. Thanks!

swift scenekit arkit coreml
1 Answer

I found the solution. The problem was that the camera has a limited number of pixel buffers available, and I was queuing up too many of them while a previous Vision task was still running.

That is why the camera feed was slow. The fix is to hold on to only one buffer at a time and release it once the request finishes, before processing another one.

internal var currentBuffer: CVPixelBuffer?

// A stored serial queue, so Vision requests run off the main thread, one at a time.
private let visionQueue = DispatchQueue(label: "CoreML_request")

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Drop the frame unless the previous buffer has been released and tracking is normal.
    guard currentBuffer == nil, case .normal = frame.camera.trackingState else {
        return
    }
    self.currentBuffer = frame.capturedImage

    visionQueue.async {
        guard let pixelBuffer = self.currentBuffer else { return }

        let exifOrientation = self.exifOrientationFromDeviceOrientation()

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: [:])
        do {
            // Release the pixel buffer when done, allowing the next buffer to be processed.
            defer { self.currentBuffer = nil }
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }
}
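Holding at most one capturedImage buffer at a time is the key design choice here: the camera draws from a limited pool of pixel buffers, so retaining several of them at once starves the capture pipeline and stalls the feed. Clearing currentBuffer in a defer guarantees it is released whether or not the Vision request succeeds.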

You can find the relevant documentation here:

https://developer.apple.com/documentation/arkit/recognizing_and_labeling_arbitrary_objects
