I am trying to run some long-running code on a dispatch queue, in both concurrent and serial mode, and I'm getting much better execution times in serial mode (and also when dispatching with DispatchQueue.sync). When I check CPU utilization, all my cores sit at 100% in concurrent mode, while in serial mode three cores are barely running. Yet serial mode is about 3x faster...
import Foundation

var data = [Int](repeating: 0, count: 10_000_000)
let totalStartTime = Date()
let g = DispatchGroup()
// Toggle attributes between [.concurrent] and [] (serial) for the two runs.
let q = DispatchQueue(label: "myQueue", qos: .default, attributes: [.concurrent])

(0..<8).forEach { _ in
    q.async(group: g) {
        let startTime = Date()
        // Every block writes into the same shared array.
        for i in data.indices { data[i] = Int(arc4random_uniform(1000)) }
        print("\(Date().timeIntervalSince(startTime)) s")
    }
}

g.wait()
print("Total: \(Date().timeIntervalSince(totalStartTime)) s")
Output from concurrent mode, using attributes: [.concurrent]:
10.6698750257492 s
10.6752609610558 s
10.6778860092163 s
10.6787539720535 s
10.6814050078392 s
10.6833950281143 s
10.6855019927025 s
10.6869139671326 s
Total: 10.6888610124588 s
Output from serial mode, using attributes: []:
0.451411008834839 s
0.437644004821777 s
0.432780981063843 s
0.431493043899536 s
0.447892963886261 s
0.426267981529236 s
0.42144501209259 s
0.421342968940735 s
Total: 3.472864985466 s
I'd expect the individual times to be approximately the same in concurrent and serial mode, and the total time to be several times shorter in concurrent mode, not the other way around. Am I missing anything?
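For reference, this is the kind of pattern I would have expected to scale: a sketch (reusing q and g from above) where each block fills its own local buffer instead of the shared data array.

// Sketch of the concurrency I expected: each block fills a
// private buffer, so no two blocks share array storage.
(0..<8).forEach { _ in
    q.async(group: g) {
        var localData = [Int](repeating: 0, count: 10_000_000)
        let startTime = Date()
        for i in localData.indices { localData[i] = Int(arc4random_uniform(1000)) }
        print("\(Date().timeIntervalSince(startTime)) s")
    }
}
g.wait()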