I'm working on a graphics application and experimenting with parallelizing the simulation and rendering. My first attempt uses three queues: the main queue for simulation, one queue for rendering, and one for synchronization:
```swift
let renderQueue = DispatchQueue(label: "render", qos: .userInitiated)
let syncQueue = DispatchQueue(label: "sync", qos: .userInitiated)

// currentState is the state shared between the simulation and the
// renderer; all access goes through syncQueue.
var currentState = SimState()

func run() {
    // Simulation loop, running on the main queue.
    DispatchQueue.main.async {
        while true {
            let newState = simulation.update(currentState)
            syncQueue.sync { currentState = newState }
        }
    }
    // Render loop, running on its own queue.
    renderQueue.async {
        while true {
            var renderState: SimState!
            syncQueue.sync { renderState = currentState }
            renderer.render(renderState)
        }
    }
}
```
This works, but with noticeable stutters. When I profile the application, I can see periods where all three queues are blocked at the same time.
I noticed that in the slides for Apple's "Metal game performance optimization" talk, they actually use pthread primitives for parallelization.
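If I'm reading the slides correctly, the pthread approach would replace my sync queue with a mutex around the shared state. My rough, untested sketch of that idea is below (StateBox is my own name, and SimState stands in for my real type; I'm also aware that a production version would probably heap-allocate the pthread_mutex_t, since Swift doesn't strictly guarantee a stable address for a stored property):

```swift
import Foundation

// Sketch: shared state guarded by a pthread mutex instead of a serial queue.
final class StateBox {
    private var mutex = pthread_mutex_t()
    private var state = SimState()

    init() { pthread_mutex_init(&mutex, nil) }
    deinit { pthread_mutex_destroy(&mutex) }

    // Called by the render loop to grab the latest state.
    func load() -> SimState {
        pthread_mutex_lock(&mutex)
        defer { pthread_mutex_unlock(&mutex) }
        return state
    }

    // Called by the simulation loop to publish a new state.
    func store(_ newState: SimState) {
        pthread_mutex_lock(&mutex)
        state = newState
        pthread_mutex_unlock(&mutex)
    }
}
```

Both loops would then call `load()`/`store()` directly rather than hopping through `syncQueue.sync`.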
So is pthread simply better suited to performance-critical parallelization, or is Dispatch still appropriate for this kind of workload?