Possible to access AVCaptureSession?

Is it possible to access the AVCaptureSession used by an ARSession or ARSCNView?

Replies

I would also like to find out if this is possible. Anyone know?

Sorry, no it's not.

Just to know more, what did you want to do with it?

Check out https://github.com/biscuitehh/spiderpig. You couldn't submit the resulting app to the App Store due to the use of private ARKit headers, but you could still make some cool tech demos until Apple opens up this API more.

Feed it into a Machine Learning model

What if we have an existing camera app that uses AVCaptureSession and would like to use ARKit to show AR objects on top of the preview? We might want to get the capture session from ARSCNView and replace our original one. Otherwise we would have to rewrite a lot of code to adopt ARSCNView, which we weren't using before. Any suggestions?
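
Since the capture session itself isn't exposed, one common workaround is to flip the direction of the integration: let ARKit own the camera and feed its per-frame pixel buffers into your existing pipeline. A minimal sketch, where processPixelBuffer(_:) is a hypothetical stand-in for your existing per-frame processing:

import ARKit

final class CameraBridge: NSObject, ARSessionDelegate {
    // ARKit delivers every camera frame here; capturedImage is the raw
    // camera pixel buffer, before any AR content is composited.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        processPixelBuffer(frame.capturedImage)
    }

    // Hypothetical stand-in for the processing that previously consumed
    // AVCaptureVideoDataOutput sample buffers.
    func processPixelBuffer(_ pixelBuffer: CVPixelBuffer) {
        // ... existing per-frame work ...
    }
}

// Usage: sceneView.session.delegate = bridge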

Is there a way to run ARKit and record a video at the same time, without using screen recording?

I am curious whether we could actually record three layers of video at the same time while an ARKit session is running:

- Original video plate / static background

- Just the AR

- Some other layers


I'd really like to see an example that uses depth disparity and masking in a clever way to simulate occlusion.

sceneView.session.currentFrame?.capturedImage can give you all the images you need to create a video.

For the AR frames, you can use sceneView.session.currentFrame?.capturedImage to capture images one by one and assemble them into a video.

For overlay layers, capture images / videos based on whatever you use to render them.
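
To flesh that out, here is a minimal sketch of assembling those pixel buffers into a movie with AVAssetWriter; the output URL, dimensions, and codec choice are placeholders, and real code would need more error handling:

import AVFoundation
import CoreMedia

final class FrameRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var startTime: TimeInterval?

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                       AVVideoWidthKey: width,
                                       AVVideoHeightKey: height]
        input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
        guard writer.startWriting() else {
            throw writer.error ?? NSError(domain: "FrameRecorder", code: -1)
        }
        writer.startSession(atSourceTime: .zero)
    }

    // Call once per frame with frame.capturedImage and frame.timestamp
    // from session(_:didUpdate:).
    func append(_ pixelBuffer: CVPixelBuffer, timestamp: TimeInterval) {
        if startTime == nil { startTime = timestamp }
        guard let start = startTime, input.isReadyForMoreMediaData else { return }
        let time = CMTime(seconds: timestamp - start, preferredTimescale: 600)
        if !adaptor.append(pixelBuffer, withPresentationTime: time) {
            print("Dropped a frame: \(writer.error?.localizedDescription ?? "unknown")")
        }
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}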


I have been struggling to get the background video too. No documentation at the moment.

I'm making a sample application that feeds sceneView.session.currentFrame?.capturedImage into Core ML, but I've hit a critical performance issue. Whenever Core ML performs a request, it blocks the ARKit rendering, even though I do all of the Core ML work on a background thread.

I modified the sample below to use my own Core ML model. The sample originally used a small MobileNet model (about 17 MB), so the flickering is very hard to observe, but my model is over 200 MB, and the ARKit rendering gets blocked for a few milliseconds whenever Core ML runs.

https://github.com/hanleyweng/CoreML-in-ARKit

Do you have any ideas? Please help.
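
One pattern that usually fixes this (a minimal sketch, assuming a Vision-wrapped Core ML model): run at most one request at a time on a serial background queue and drop camera frames while a request is in flight, so inference can never back up the render loop. Also make sure the 200 MB model is loaded once up front rather than per request, since loading it is expensive on its own.

import ARKit
import Vision

final class FrameClassifier {
    private let queue = DispatchQueue(label: "inference.queue")
    private var isBusy = false
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        request = VNCoreMLRequest(model: model) { request, _ in
            // Handle request.results here (e.g. VNClassificationObservation).
        }
    }

    // Call from session(_:didUpdate:) with frame.capturedImage.
    // Frames that arrive while a request is running are simply skipped.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        guard !isBusy else { return }
        isBusy = true
        queue.async {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([self.request])
            self.isBusy = false
        }
    }
}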

I would like to do something similar, but only because I need a higher-resolution image than the one given by sceneView.session.currentFrame?.capturedImage.


I am trying to run an AVCaptureSession alongside an ARSession, and take a photo with the AVCaptureSession whenever I select something in the ARSession.

I am trying to do the same thing - I need to grab a raw still image on demand. I am currently using currentFrame?.capturedImage from the ARSession, but would like better resolution and colour depth. Ideally I would just like to add an output device, e.g. an AVCaptureOutput, to the ARSession and grab a shot whenever needed. I have tried setting up my own AVCaptureSession, but they don't seem to play nicely together. I will try momentarily pausing the ARSession while I grab the shot.
  • Did you manage to make AVCaptureSession work with ARSession this way?

    I'm trying the same here, but even after I stop my scene view object, AVCaptureDevice fails.
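
A minimal sketch of that pause-and-capture approach, handing the camera to one session at a time; all names here are placeholders and the input/output wiring is reduced to the essentials:

import ARKit
import AVFoundation

final class StillGrabber: NSObject, AVCapturePhotoCaptureDelegate {
    private let photoSession = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    private weak var arSession: ARSession?
    private var arConfiguration: ARConfiguration?

    func configure() {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              photoSession.canAddInput(input),
              photoSession.canAddOutput(photoOutput) else { return }
        photoSession.sessionPreset = .photo
        photoSession.addInput(input)
        photoSession.addOutput(photoOutput)
    }

    // Pause ARKit, hand the camera to AVFoundation, grab one still.
    // Call off the main thread, since startRunning() blocks.
    func grabStill(pausing arSession: ARSession, configuration: ARConfiguration) {
        self.arSession = arSession
        self.arConfiguration = configuration
        arSession.pause()
        photoSession.startRunning()
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        let stillData = photo.fileDataRepresentation() // the full-res still
        _ = stillData
        photoSession.stopRunning()
        // Resume ARKit; expect world tracking to relocalize afterwards.
        if let config = arConfiguration {
            arSession?.run(config)
        }
    }
}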


We can access the AVCaptureDevice with the new API introduced in ARKit 6 (iOS 16):

if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
   do {
      try device.lockForConfiguration()
      // configure AVCaptureDevice settings
      …
      device.unlockForConfiguration()
   } catch {
      // error handling
      …
   }
}
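
To make the "configure AVCaptureDevice settings" step concrete, here is one illustrative setting (an example, not the canonical use): locking focus at the current lens position so autofocus doesn't hunt during tracking.

import ARKit
import AVFoundation

if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
    do {
        try device.lockForConfiguration()
        // Illustrative setting: pin focus at the current lens position.
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device: \(error)")
    }
}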

For more, please check out the WWDC22 session "Discover ARKit 6" at 12:35.