I've been using the Vision object-tracking code from the Apple sample and others. That has all been in Swift, and the programs work nicely.
I just ported the image recognizer to Objective-C with a few lines of code, but I get an error from the VNRequest handler:
error=Error Domain=com.apple.vis Code=8 "-[VNRequest internalPerformInContext:error:] has not been implemented"
My code is very simple: it takes a UIImage and looks for objects.
Since I'm not processing video (yet), there was no processor to initialize, but maybe something else needs to be set up?
Any ideas?
- (void)processImageWithVision:(UIImage *)image {
    CIImage *ciImage = image.CIImage;
    if (!ciImage)
        ciImage = [[CIImage alloc] initWithCGImage:image.CGImage];

    NSDictionary<VNImageOption, id> *dict =
        [[NSDictionary alloc] initWithObjectsAndKeys:@(0.8), @"confidenceThreshold", nil];

    // Handler to process the image
    VNImageRequestHandler *imageRequestHandler =
        [[VNImageRequestHandler alloc] initWithCIImage:ciImage
                                           orientation:kCGImagePropertyOrientationUp
                                               options:dict];

    // The request whose completion handler gets called
    VNRequest *vnRequest =
        [[VNRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request,
                                                       NSError * _Nullable error) {
            NSLog(@"vnRequest called %@, %@, error=%@", request, request.results, error);
        }];
    vnRequest.preferBackgroundProcessing = YES; // doesn't matter

    // Requests to pass to performRequests:error:
    NSArray<VNRequest *> *vnRequests = [NSArray arrayWithObject:vnRequest];

    // Schedules the Vision requests to be performed.
    [imageRequestHandler performRequests:vnRequests error:nil];
}
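For what it's worth, my understanding is that VNRequest is an abstract base class, so the error may mean a concrete subclass is required. Here is a minimal sketch of the same flow using VNClassifyImageRequest (an assumption on my part about the recognition task; any concrete VNRequest subclass would slot in the same way):

#import <UIKit/UIKit.h>
#import <Vision/Vision.h>

- (void)classifyImageWithVision:(UIImage *)image {
    CIImage *ciImage = image.CIImage;
    if (!ciImage)
        ciImage = [[CIImage alloc] initWithCGImage:image.CGImage];

    // VNClassifyImageRequest is a concrete subclass, so it provides the
    // internal perform implementation that the abstract VNRequest lacks.
    VNClassifyImageRequest *request = [[VNClassifyImageRequest alloc]
        initWithCompletionHandler:^(VNRequest * _Nonnull req, NSError * _Nullable error) {
            for (VNClassificationObservation *obs in req.results) {
                NSLog(@"%@ (%.2f)", obs.identifier, obs.confidence);
            }
        }];

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCIImage:ciImage
                                           orientation:kCGImagePropertyOrientationUp
                                               options:@{}];

    // Capture the error instead of passing nil, so failures are visible.
    NSError *performError = nil;
    if (![handler performRequests:@[request] error:&performError]) {
        NSLog(@"performRequests failed: %@", performError);
    }
}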