I'm trying to get head pose information during head tracking with Vision.framework on macOS (Ventura). I can get it on the first frame, but I'm not sure how to get it on subsequent frames. I create the face landmarks detection request like this:
detectLandmarksRequest = [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
    for (VNFaceObservation *observation in request.results) {
        VNTrackObjectRequest *nextTrackRequest = [[VNTrackObjectRequest alloc] initWithDetectedObjectObservation:observation];
        [_faceTrackingRequests addObject:nextTrackRequest];
    }
}];
detectLandmarksRequest.constellation = VNRequestFaceLandmarksConstellation76Points;
detectLandmarksRequest.revision = VNDetectFaceLandmarksRequestRevision3;
Then I detect faces using an image request handler:
VNImageRequestHandler *detectionHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                                      options:@{}];
NSError *detectionError = nil;
NSArray *landmarkRequest = @[detectLandmarksRequest];
if (![detectionHandler performRequests:landmarkRequest
                                 error:&detectionError]) {
    // Handle errors
}
[detectionHandler release];
When the completion handler is called, the observation contains roll and yaw but not pitch, even though the request revision is set to VNDetectFaceLandmarksRequestRevision3.
Because I'm doing face tracking, I don't create any further landmark requests. I follow the same pattern as the VisionFaceTrack sample on Apple's developer site. In the completion block above, I take the results of the face detection and create object-tracking requests from them. Then I call a sequence request handler to track the faces in subsequent video frames. (I'm processing video on disk, not from a camera, fwiw.)
NSError *trackingError = nil;
if (![sequenceHandler performRequests:_faceTrackingRequests
                      onCVPixelBuffer:pixelBuffer
                                error:&trackingError]) {
    // Handle errors
}
Then I take the sequence handler's results, extract the detected objects, and issue a new set of requests to track the detected objects on the next frame, and so on:
NSMutableArray<VNTrackObjectRequest*> *newRequests = [NSMutableArray array];
for (VNTrackObjectRequest *nextTrackingRequest in _faceTrackingRequests) {
    for (VNDetectedObjectObservation *observation in nextTrackingRequest.results) {
        nextTrackingRequest.inputObservation = observation;
        [newRequests addObject:nextTrackingRequest];
    }
}
[_faceTrackingRequests release];
_faceTrackingRequests = [newRequests retain];
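One refinement worth making in this recycling loop, along the lines of what the VisionFaceTrack sample does: drop trackers whose confidence has collapsed, and set `lastFrame` so Vision can release the tracker's internal state. This is a sketch, and the 0.3 threshold is an illustrative value, not anything canonical:

```objc
// Sketch: prune stale trackers while recycling requests.
// The 0.3 confidence threshold is illustrative, not canonical.
NSMutableArray<VNTrackObjectRequest*> *newRequests = [NSMutableArray array];
for (VNTrackObjectRequest *trackingRequest in _faceTrackingRequests) {
    for (VNDetectedObjectObservation *observation in trackingRequest.results) {
        if (observation.confidence > 0.3) {
            trackingRequest.inputObservation = observation;
            [newRequests addObject:trackingRequest];
        } else {
            // Mark the tracker finished so Vision can release its internal state.
            trackingRequest.lastFrame = YES;
        }
    }
}
```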
I get back all of the face landmarks, but at no subsequent point do I get any results containing the head's roll, pitch, and yaw. In fact, when I track the face landmarks, the API expects a roll, pitch, and yaw that I don't have:
NSMutableArray<VNFaceObservation*> *observations = [NSMutableArray arrayWithCapacity:nextFaceTrackingRequest.results.count];
for (VNDetectedObjectObservation *nextObservation in nextFaceTrackingRequest.results) {
    VNFaceObservation *faceObservation = [VNFaceObservation faceObservationWithRequestRevision:VNDetectFaceLandmarksRequestRevision3
                                                                                   boundingBox:nextObservation.boundingBox
                                                                                          roll:nil
                                                                                           yaw:nil
                                                                                         pitch:nil];
    [observations addObject:faceObservation];
}
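For completeness, my per-frame driver loop looks roughly like this; `copyNextFramePixelBuffer` and `trackFacesInPixelBuffer:` are hypothetical helper names standing in for the AVFoundation reading code I've elided and the tracking code above:

```objc
// Sketch of the per-frame loop; the helper selectors are placeholders, not Vision API.
CVPixelBufferRef pixelBuffer = NULL;
NSInteger frameNumber = 0;
while ((pixelBuffer = [self copyNextFramePixelBuffer]) != NULL) {
    [self trackFacesInPixelBuffer:pixelBuffer]; // VNSequenceRequestHandler path above
    CVPixelBufferRelease(pixelBuffer);
    frameNumber++;
}
```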
How do I get roll, pitch, and yaw from Vision.framework after the first face-detection call? And why do I only get roll and yaw on that call, even though I request the newer request revision?
As far as I can tell, face pose is not tracked from frame to frame. To get the face pose on every frame, you have to create a new face rectangles detection request for each frame, in addition to everything else you're doing. Calling this on every frame gets me roll, pitch, and yaw:
NSError *detectionError = nil;
NSMutableArray<NSDictionary*> *poses = [NSMutableArray array];
VNDetectFaceRectanglesRequest *facePoseRequest = [[VNDetectFaceRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
    for (VNFaceObservation *nextObservation in request.results) {
        NSNumber *roll = nextObservation.roll;
        NSNumber *pitch = nextObservation.pitch;
        NSNumber *yaw = nextObservation.yaw;
        if (roll == nil || pitch == nil || yaw == nil) {
            continue; // dictionary literals throw on nil values
        }
        NSDictionary *pose = @{
            kKey_Roll : roll,
            kKey_Pitch : pitch,
            kKey_Yaw : yaw
        };
        [poses addObject:pose];
    }
}];
// A fresh handler for the current frame's pixel buffer:
VNImageRequestHandler *detectionHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                                      options:@{}];
NSArray *poseRequests = @[facePoseRequest];
if (![detectionHandler performRequests:poseRequests
                                 error:&detectionError]) {
    NSLog(@"Unable to get face pose on frame %ld", frameNumber);
}
[detectionHandler release];
[facePoseRequest release];
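As for why the rectangles request reports pitch while the landmarks request didn't: as far as I can tell, `VNFaceObservation.pitch` only arrived alongside `VNDetectFaceRectanglesRequestRevision3` (macOS 12+), so pitch appears to come from the rectangles detector at that revision. If the default revision ever differs, it may be worth pinning it explicitly:

```objc
// Assumption: pitch is only populated at rectangles-request revision 3 (macOS 12+).
if (@available(macOS 12.0, *)) {
    facePoseRequest.revision = VNDetectFaceRectanglesRequestRevision3;
}
```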