Dear AVFoundation engineers and other AVFoundation developers,
In the context of a multilayer video editing timeline with 4 or more layers, is it a problem to have just one AVVideoCompositionInstruction covering the entire time range of the timeline? The instruction's requiredSourceTrackIDs would contain all the tracks added to the AVMutableComposition, containsTweening would be true, and so on. Then at any frame time, the custom compositor would consult its own internal data structures and blend the video frames of the different tracks as required. Is there anything wrong with this approach from a performance perspective, especially on newer iOS devices (iPhone 7 or later)?
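For concreteness, here is a minimal sketch of the setup I'm describing, assuming the AVMutableComposition already contains the layered video tracks. The type names `FullTimelineInstruction`, `MultilayerCompositor`, and `makeVideoComposition(for:)` are just placeholders for illustration, and the blending itself is omitted:

```swift
import AVFoundation
import CoreGraphics
import CoreMedia
import CoreVideo

// Hypothetical custom instruction spanning the whole timeline.
final class FullTimelineInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    let timeRange: CMTimeRange
    let enablePostProcessing = false
    // Frames change continuously, so identical frames can't be reused.
    let containsTweening = true
    // Every layer's track is declared required so its frames are delivered.
    let requiredSourceTrackIDs: [NSValue]?
    let passthroughTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange, trackIDs: [CMPersistentTrackID]) {
        self.timeRange = timeRange
        self.requiredSourceTrackIDs = trackIDs.map { NSNumber(value: $0) as NSValue }
    }
}

// Hypothetical custom compositor that blends all layers per frame.
final class MultilayerCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String: Any]? {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }
    var requiredPixelBufferAttributesForRenderContext: [String: Any] {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // Cache the render context here if buffer allocation needs it.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let output = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "MultilayerCompositor", code: -1))
            return
        }
        // Consult the app's own timeline model for this frame time, then blend
        // the frames of whichever tracks are active (Metal, Core Image, etc.).
        for trackID in request.sourceTrackIDs {
            if let frame = request.sourceFrame(byTrackID: trackID.int32Value) {
                _ = frame // blend `frame` into `output` here
            }
        }
        request.finish(withComposedVideoFrame: output)
    }
}

// Build a video composition with exactly one instruction for the full duration.
func makeVideoComposition(for composition: AVMutableComposition) -> AVMutableVideoComposition {
    let trackIDs = composition.tracks(withMediaType: .video).map { $0.trackID }
    let instruction = FullTimelineInstruction(
        timeRange: CMTimeRange(start: .zero, duration: composition.duration),
        trackIDs: trackIDs
    )

    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = MultilayerCompositor.self
    videoComposition.instructions = [instruction]
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.renderSize = CGSize(width: 1920, height: 1080)
    return videoComposition
}
```

In this sketch, declaring every track in requiredSourceTrackIDs means frames for all layers are decoded and handed to startRequest(_:) on every output frame, which is exactly the behavior I'd like to understand the performance implications of.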