Vision/CoreML and Concurrency

I am working with the Vision and CoreML frameworks. I have a real-time video feed. For every frame, I first detect rectangles using VNDetectRectanglesRequest. For every rectangle I detect, I crop out that part of the image and perform a VNCoreMLRequest to classify it. If the classified object is the type I am looking for, I draw the rectangle. In effect, I have built an object detector without having the data to train an actual neural network for detection.


Generally, I detect around 1 to 3 rectangles, so for every VNDetectRectanglesRequest I have 1 to 3 additional VNCoreMLRequests to perform per frame. However, performing all these requests makes my video stream very laggy. It's quite noticeable when I point my camera at rectangular objects. I should add that the video footage comes from ARKit, so whatever background work ARKit is performing might be making the lag worse.


I tried to optimize the code using DispatchQueue. Below is my pseudo-code. I'm happy with what the code is doing, but I need to get rid of the lag. Thoughts?
// Pseudo-code: `pixelBuffer` is the current frame, `model` my VNCoreMLModel.
DispatchQueue.global(qos: .background).async {
    let request = VNDetectRectanglesRequest { request, error in
        // ...
        guard let observations = request.results as? [VNRectangleObservation] else { return }
        for observation in observations {
            let mlRequest = VNCoreMLRequest(model: model) { request, error in
                // classify ... if it is the object I want, jump back to the main queue and draw
                DispatchQueue.main.async {
                    // draw rectangle
                }
            }
            // perform VNCoreMLRequest, restricted to the detected rectangle
            mlRequest.regionOfInterest = observation.boundingBox
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([mlRequest])
        }
    }
    // perform VNDetectRectanglesRequest on the full frame
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
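One more thing I have wondered about, though I have not measured it: the code above allocates a fresh VNCoreMLRequest for every rectangle on every frame. A sketch of what reusing a single request could look like (RectangleClassifier and its members are placeholder names of mine):

import Vision

// Sketch: build the CoreML request once and retarget it per rectangle,
// instead of allocating a new request inside the detection handler.
final class RectangleClassifier {
    private let model: VNCoreMLModel
    private lazy var mlRequest: VNCoreMLRequest = {
        let request = VNCoreMLRequest(model: model)
        request.imageCropAndScaleOption = .scaleFill
        return request
    }()

    init(model: VNCoreMLModel) { self.model = model }

    func topLabel(for boundingBox: CGRect, in pixelBuffer: CVPixelBuffer) -> VNClassificationObservation? {
        mlRequest.regionOfInterest = boundingBox   // retarget the reused request
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([mlRequest])
        return (mlRequest.results as? [VNClassificationObservation])?.first
    }
}

Relatedly, qos: .background is the lowest-priority tier and can be throttled by the system; for latency-sensitive per-frame work, something like .userInitiated seems like the more common choice, though I have not verified that it helps here.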