I'm following this tutorial to implement object tracking on iOS 11. I can track an object perfectly until a certain point, and then this error shows up in the console:
Throws: Error Domain=com.apple.vis Code=9 "Internal error: Exceeded maximum allowed number of Trackers for a tracker type: VNObjectTrackerType" UserInfo={NSLocalizedDescription=Internal error: Exceeded maximum allowed of Trackers for a tracker type: VNObjectTrackerType}
Am I using the API incorrectly, or is Vision perhaps unable to handle too many sequential object-tracking tasks? Curious whether anyone knows why this happens.
It looks like you've hit the limit on the number of trackers that can be active in the system. The first thing to note is that each new observation spawns a new tracker, identified by its new `uuid` property. You should recycle the initial observation you used to start the tracker for as long as you want to keep using it, by feeding what you got from the "results" at time T into the subsequent request you issue for time T+1. When you no longer want to use that tracker (perhaps because the confidence score gets too low), there is a "lastFrame" property that lets the Vision framework know you are done with that tracker. Trackers are also released when the sequence request handler is deallocated.
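The recycling described above can be sketched like this (a minimal sketch; the class name, `latestObservation` property, and the 0.3 confidence threshold are assumptions, not part of the original answer):

```swift
import Vision

final class SingleObjectTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var latestObservation: VNDetectedObjectObservation?

    /// Seed the tracker once with the initial observation.
    func start(with observation: VNDetectedObjectObservation) {
        latestObservation = observation
    }

    /// Call for every subsequent frame: recycle the result from time T
    /// into the request for time T+1 so Vision reuses the same tracker
    /// (same uuid) instead of allocating a new one per frame.
    func track(on pixelBuffer: CVPixelBuffer) {
        guard let observation = latestObservation else { return }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)

        // When confidence collapses, mark the last frame so Vision
        // releases the tracker slot instead of leaking it.
        if observation.confidence < 0.3 {   // threshold is an assumption
            request.isLastFrame = true
            latestObservation = nil
        }

        try? sequenceHandler.perform([request], on: pixelBuffer)

        // Recycle: carry this frame's result into the next request.
        if let result = request.results?.first as? VNDetectedObjectObservation {
            latestObservation = result
        }
    }
}
```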
To track rectangles, you feed the subsequent observations to the same `VNSequenceRequestHandler` instance, e.g. `handler`. When the rectangle is lost, i.e. the new observation is `nil` in your handler function/callback, or you hit some other tracking error, simply re-instantiate `handler` and carry on, e.g. (sample code to show the idea):
```swift
private var handler = VNSequenceRequestHandler()
// <...>

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let lastObservation = self.lastObservation
    else {
        self.handler = VNSequenceRequestHandler()
        return
    }

    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation,
                                       completionHandler: self.handleVisionRequestUpdate)
    request.trackingLevel = .accurate

    do {
        try self.handler.perform([request], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
```
Note that `handler` is a `var`, not a constant. Also, if the new observation is invalid, you can re-instantiate `handler` inside your actual handler function (e.g. `func handleVisionRequestUpdate(_ request: VNRequest, error: Error?)`).
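Such a handler function might look like this (a sketch assuming the `lastObservation` and `handler` properties from the snippet above; the confidence cutoff is an assumption):

```swift
func handleVisionRequestUpdate(_ request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard error == nil,
              let newObservation = request.results?.first as? VNDetectedObjectObservation,
              newObservation.confidence >= 0.3   // cutoff is an assumption
        else {
            // Tracking lost: drop the stale observation and start a
            // fresh sequence handler so the old trackers are released.
            self.lastObservation = nil
            self.handler = VNSequenceRequestHandler()
            return
        }
        // Recycle the new observation into the next frame's request.
        self.lastObservation = newObservation
    }
}
```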
My problem was that I had one function calling perform... on the same `VNSequenceRequestHandler` while the tracking code was also calling perform, because I was processing too much at the same time: `try self.visionSequenceHandler.perform(trackRequests, on: ciimage)`. Make sure the `VNSequenceRequestHandler` is never hit by multiple perform calls concurrently....
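One way to guarantee that is to funnel every perform call through a single serial queue (a sketch; the class name and queue label are arbitrary):

```swift
import Vision

final class SerializedTrackingPipeline {
    private let visionSequenceHandler = VNSequenceRequestHandler()
    // A serial DispatchQueue runs one block at a time, so the sequence
    // handler can never be entered by two perform calls concurrently.
    private let visionQueue = DispatchQueue(label: "vision.tracking.serial")

    func perform(_ trackRequests: [VNTrackObjectRequest], on pixelBuffer: CVPixelBuffer) {
        visionQueue.async {
            do {
                try self.visionSequenceHandler.perform(trackRequests, on: pixelBuffer)
            } catch {
                print("Tracking failed: \(error)")
            }
        }
    }
}
```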
To avoid the "Exceeded maximum allowed number of Trackers" error, you must stop the tracker for every object you are tracking. You have to set `isLastFrame` to `true` and perform the `request` on the `handler` one more time.
```swift
extension VNTrackingRequest {
    /// Marks this request as the final frame for its tracker and performs
    /// it one last time, so Vision releases the underlying tracker.
    func completeTracking(
        with handler: VNSequenceRequestHandler,
        on sampleBuffer: CMSampleBuffer
    ) {
        isLastFrame = true
        try? handler.perform([self], on: sampleBuffer)
    }
}
```
Here is an example of handling it. If there is no `observation`, `ObjectTracker` tries to locate a new object. But if an object has already been detected (`observation`), it tracks the object's position.
```swift
class ObjectTracker: CameraViewControllerOutputDelegate {
    var observation: VNDetectedObjectObservation?

    private lazy var objectDetectRequest: VNCoreMLRequest = newObjectDetectRequest()
    private var objectTrackingRequest: VNTrackObjectRequest?

    private func newObjectDetectRequest() -> VNCoreMLRequest {
        do {
            let mlModel = try YourMLModel(configuration: .init()).model
            let model = try VNCoreMLModel(for: mlModel)
            let request = VNCoreMLRequest(model: model)
            return request
        } catch {
            fatalError("Failed to load ML Model. Error: \(error)")
        }
    }

    private func newObjectTrackingRequest(
        for observation: VNDetectedObjectObservation
    ) -> VNTrackObjectRequest {
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        request.trackingLevel = .fast
        return request
    }

    func cameraViewController(
        _ controller: CameraViewController,
        didReceiveBuffer buffer: CMSampleBuffer,
        orientation: CGImagePropertyOrientation
    ) {
        if let oldObservation = observation {
            // An object was already detected: track its position.
            objectTrackingRequest = objectTrackingRequest ?? newObjectTrackingRequest(for: oldObservation)
            let visionHandler = VNSequenceRequestHandler()
            try? visionHandler.perform([objectTrackingRequest!], on: buffer, orientation: orientation)

            if let newObservation = objectTrackingRequest?.results?.first as? VNDetectedObjectObservation {
                // Recycle the fresh observation for the next frame.
                observation = newObservation
                objectTrackingRequest?.inputObservation = newObservation
            } else {
                // Tracking lost: complete the tracker so Vision releases it.
                observation = nil
                objectTrackingRequest?.completeTracking(with: visionHandler, on: buffer)
                objectTrackingRequest = nil
            }
        } else {
            // No observation yet: run detection to locate a new object.
            objectDetectRequest = newObjectDetectRequest()
            let visionHandler = VNImageRequestHandler(cmSampleBuffer: buffer, orientation: orientation, options: [:])
            try? visionHandler.perform([objectDetectRequest])

            if let newObservation = objectDetectRequest.results?.first as? VNRecognizedObjectObservation {
                observation = newObservation
            }
        }
    }
}
```