ARKit and Core ML (Vision)

I want to create an app that merges ARKit and Core ML. From my understanding, ARKit captures its camera feed using AVFoundation, so it should be possible to use the same source to feed the Vision requests.

But I'm not sure where to start.

Replies

We are very interested as well. We tried to have both working in the same project, but when you run an ARSession, the AVCaptureSession stops automatically. It would be great to know how to merge both functionalities.

We here at the Tyrell Corporation are very interested in that as well. Have some things going on Off-World that we could use this technology for.

You can create an abstraction class to do this. Since the method:


- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame;


also provides a pixel buffer at frame.capturedImage, you can use the standard approach to handling pixel buffers containing kCVPixelFormatType_420YpCbCr8BiPlanarFullRange data.
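
For example, here is a minimal sketch of running a Vision request against that buffer. It assumes self.visionModel is a VNCoreMLModel you have created elsewhere from your own Core ML model, and the cast to VNClassificationObservation assumes a classifier-style model:

#import <ARKit/ARKit.h>
#import <Vision/Vision.h>
#import <ImageIO/ImageIO.h>

- (void)runVisionOnFrame:(ARFrame *)frame {
    VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:self.visionModel
                                                    completionHandler:^(VNRequest *req, NSError *error) {
        // Assumes a classification model; adjust for other observation types.
        VNClassificationObservation *top = (VNClassificationObservation *)req.results.firstObject;
        NSLog(@"%@ (%.2f)", top.identifier, top.confidence);
    }];
    // The captured image arrives in sensor (landscape) orientation;
    // pass the orientation that matches how the device is being held.
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc]
        initWithCVPixelBuffer:frame.capturedImage
                   orientation:kCGImagePropertyOrientationRight
                       options:@{}];
    NSError *error = nil;
    [handler performRequests:@[request] error:&error];
}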


On an iPhone with iOS 11 and ARKit capabilities, just set up the AR session instead. We have a pretty processor-intensive system, so we chose to drop frames when we were busy, hence the DISPATCH_QUEUE_CONCURRENT instead of a serial queue. I suggest moving off the main thread either like this or with a serial queue. If you dispatch asynchronously like this, set an atomic boolean property to keep track of whether you're still busy processing the last frame:


// Create the AR session and deliver delegate callbacks on a
// high-priority concurrent background queue instead of the main thread.
self.arSession = [ARSession new];

dispatch_queue_attr_t highPriorityAttr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_USER_INITIATED, -1);

self.arSession.delegateQueue = dispatch_queue_create("com.cambrian.ar_queue", highPriorityAttr);

self.arSession.delegate = self;
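
And a sketch of the matching delegate callback that drops frames while busy, assuming a hypothetical atomic BOOL property (isProcessingFrame) on the same class and the runVisionOnFrame: helper sketched above:

// Declared on the class:
// @property (atomic, assign) BOOL isProcessingFrame;

- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    // Drop the incoming frame if the previous one is still being processed.
    if (self.isProcessingFrame) {
        return;
    }
    self.isProcessingFrame = YES;
    [self runVisionOnFrame:frame];  // hypothetical helper from the sketch above
    self.isProcessingFrame = NO;
}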