How to apply a 3D model to a detected face with Apple Vision ("NO AR")

Question · Votes: 0 · Answers: 1

With the iPhone X's TrueDepth camera it is possible to get the 3D coordinates of any object and use that information to position and scale it. On older iPhones, however, we can't use AR with the front-facing camera.

What we have done so far is detect the face with Apple's Vision framework and draw some 2D paths around the face and its landmarks. I made a SceneView and added it as the top layer of my view with a clear background; beneath it sits an AVCaptureVideoPreviewLayer. After a face is detected, my 3D object appears on screen, but positioning and scaling it correctly according to the face's bounding box requires unprojection, and that is where I'm stuck. I also tried using CATransform3D to convert the 2D bounding box into 3D, but I failed!

Is what I'm trying to do even achievable? If I remember correctly, Snapchat was doing this before ARKit was available on the iPhone!

<img src="https://image.soinside.com/eyJ1cmwiOiAiaHR0cHM6Ly9pLmltZ3VyLmNvbS8zNmJVSGlCLmdpZiJ9" alt="Imgur">
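One detail worth pinning down before touching SceneKit: Vision reports `VNFaceObservation.boundingBox` in normalized coordinates (0...1) with the origin at the *bottom-left*, while UIKit views put the origin at the top-left. The `convert(rect:)` call in the question presumably does this conversion; here is a minimal sketch of such a helper (the function name and signature are my own, not from the question):

```swift
import Foundation

// Hypothetical helper: converts a Vision-style normalized bounding box
// (origin at the bottom-left, values 0...1) into top-left-origin view
// coordinates, which is what a UIKit overlay expects.
func convertFromVisionRect(_ normalized: CGRect, viewSize: CGSize) -> CGRect {
    let w = normalized.width * viewSize.width
    let h = normalized.height * viewSize.height
    let x = normalized.origin.x * viewSize.width
    // Vision's y origin is at the bottom, UIKit's at the top, so flip.
    let y = (1 - normalized.origin.y - normalized.height) * viewSize.height
    return CGRect(x: x, y: y, width: w, height: h)
}
```

In a real capture session you would more likely use `AVCaptureVideoPreviewLayer.layerRectConverted(fromMetadataOutputRect:)`, which also accounts for the preview layer's video gravity; the manual version above only shows the coordinate flip.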

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face", recursively: true)!
}

fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }
    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)
    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I have to unproject to convert the 2D point into a 3D point;
       this is also where the issue is. */
}
```
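The conceptual gap in the code above is that a 2D screen point only determines a *ray*: it becomes a 3D point only once you choose a depth (the `0.8` passed as z is such a choice, but in SceneKit's `unprojectPoint` the z is a normalized depth-buffer value, not metres). The same logic also gives you scale: unproject the left and right edges of the bounding box at the same depth, and the difference is the face's width in world units. Below is a self-contained pinhole-camera sketch of that math; all names are hypothetical, and in the app you would call `sceneView.unprojectPoint(_:)` instead of rolling your own:

```swift
import Foundation

// A minimal pinhole-camera sketch of what unprojection does for a
// symmetric perspective camera. A 2D point plus a chosen depth (metres
// in front of the camera) yields a camera-space 3D point.
struct SimpleCamera {
    let verticalFOV: Double   // vertical field of view, radians
    let viewWidth: Double     // view width in points
    let viewHeight: Double    // view height in points

    // Unproject a screen point (top-left origin) at the given depth.
    func unproject(x: Double, y: Double, depth: Double) -> (x: Double, y: Double, z: Double) {
        let halfH = depth * tan(verticalFOV / 2)       // half the visible height at that depth
        let halfW = halfH * (viewWidth / viewHeight)   // aspect-corrected half width
        let cx = (x / viewWidth * 2 - 1) * halfW       // map 0...width to -halfW...halfW
        let cy = (1 - y / viewHeight * 2) * halfH      // flip y: screen y grows downward
        return (cx, cy, -depth)                        // camera looks down -z
    }
}

// Unproject the left and right edges of the face box at the same depth;
// the difference is the face's width in world units, which you can divide
// by your model's native width to get the node's scale factor.
func faceWorldWidth(camera: SimpleCamera, boxMinX: Double, boxMaxX: Double,
                    y: Double, depth: Double) -> Double {
    let left = camera.unproject(x: boxMinX, y: y, depth: depth)
    let right = camera.unproject(x: boxMaxX, y: y, depth: depth)
    return right.x - left.x
}
```

Without a TrueDepth camera the depth itself has to be estimated, e.g. by assuming an average real-world face width (~0.15 m) and solving the same equations for `depth` given the bounding box's on-screen width; that assumption is exactly what depth-less face filters historically relied on.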

swift xcode 3d face-detection apple-vision
1 Answer