Apple sample code "Detecting Human Actions in a Live Video Feed" - accessing the observations associated with an action prediction

I'm having trouble reasoning about and modifying the Detecting Human Actions in a Live Video Feed sample code since I'm new to Combine.

// ---- [MLMultiArray?] -- [MLMultiArray?] ----

// Make an activity prediction from the window.
.map(predictActionWithWindow)

// ---- ActionPrediction -- ActionPrediction ----

// Send the action prediction to the delegate.
.sink(receiveValue: sendPrediction)

These are the final two operators of the video-processing pipeline, where the action prediction occurs. In the implementation of either private func predictActionWithWindow(_ currentWindow: [MLMultiArray?]) -> ActionPrediction or private func sendPrediction(_ actionPrediction: ActionPrediction), how might I access the results (the VNHumanBodyPoseObservation values) of the VNDetectHumanBodyPoseRequest that's created and handled in a function called earlier in the chain?
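
To make the question concrete, this is roughly the shape of data I'd like those downstream steps to see. The PoseFrame type and the commented-out signature below are just my own sketch; nothing like them exists in the sample project:

import CoreML
import Vision

// Sketch only: a per-frame value that keeps the pose observation alongside
// the multiarray the sample derives from it.
struct PoseFrame {
    let observation: VNHumanBodyPoseObservation?  // what the body-pose request returned for this frame
    let multiArray: MLMultiArray?                 // what the sample feeds the action classifier
}

// Hypothetical: if the window were [PoseFrame] instead of [MLMultiArray?],
// the prediction step could also hand back the observations it used.
// private func predictActionWithWindow(_ currentWindow: [PoseFrame]) -> (ActionPrediction, [VNHumanBodyPoseObservation])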

When I did this imperatively, I accessed the results inside the VNDetectHumanBodyPoseRequest's completion handler, but I'm not sure how that data flow works in Combine's programming model. I want to associate each prediction with the observations it's based on, so that I can store the time range a given prediction label covers.
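
My current (possibly wrong) understanding of Combine's data flow, shown with a toy publisher that has nothing to do with the sample: each operator only receives whatever the previous operator publishes, so the observation would apparently have to ride along inside the published value itself, e.g. as part of a tuple.

import Combine

// Toy pipeline: a map operator returns a tuple so that the original value
// and the derived value both reach the sink.
let frames = PassthroughSubject<Int, Never>()
let cancellable = frames
    .map { frame in (frame: frame, label: "label for \(frame)") } // carry both forward
    .sink { pair in
        print("frame \(pair.frame) -> \(pair.label)")
    }
frames.send(1)

If that mental model is right, is introducing a tuple or small struct upstream the idiomatic way to keep the observations available at the prediction step, or is there a better Combine pattern for this?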
