Hi,
I am migrating an audio project that uses AVAudioEngine from Objective-C to Swift.
All the objects are the same, just written in Swift instead of Objective-C.
The application takes the microphone input and analyses it in a render callback that runs an FFT from the Accelerate framework.
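For context, the analysis path is roughly structured like the sketch below: an audio input delivering buffers, a vDSP FFT, and squared magnitudes per bin. This is only an illustration, not the project's actual code; the real project uses its own render callback and buffer handling and apparently works with Double magnitudes (see the function further down), so the input tap, fftSize and the single-precision types here are assumptions.

import AVFoundation
import Accelerate

// Sketch of the capture + FFT path (illustrative only; buffer-length checks omitted).
let engine = AVAudioEngine()
let input = engine.inputNode
let fftSize = 4096
let log2n = vDSP_Length(12)   // log2(4096)
let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(fftSize),
                 format: input.outputFormat(forBus: 0)) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    var real = [Float](repeating: 0, count: fftSize / 2)
    var imag = [Float](repeating: 0, count: fftSize / 2)
    var magnitudes = [Float](repeating: 0, count: fftSize / 2)
    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the time samples, run the real FFT, then take squared magnitudes per bin.
            channel.withMemoryRebound(to: DSPComplex.self, capacity: fftSize / 2) { packed in
                vDSP_ctoz(packed, 2, &split, 1, vDSP_Length(fftSize / 2))
            }
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(fftSize / 2))
        }
    }
    // magnitudes is then summarised into a small number of frequency bands,
    // which is what the function discussed below does.
}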
While the session is running, the CPU usage differs significantly:
- The Objective-C version keeps the CPU usage at 8%.
- The Swift version averages 20%.
I have isolated the code responsible for the increase: it is a method that analyses the returned samples in order to summarize them.
Parameters:
- arrMagCount = 12
- arrMag is an array of Float containing the magnitude of the frequency range corresponding to each index
- samplesCount = 4096
- samples is a pointer to an array containing the magnitude of each bin used (declared as UnsafePointer<Double> in the Swift version)
Result: each arrMag[index] is updated with the magnitude of the frequency range covered by that index (an illustrative call site follows the Swift function below).
The Objective-C version calls a C function for this processing.
The Swift version was written in pure Swift, since C and Swift code cannot be mixed in the same source file.
func arrMagFill(arrMagCount: Int, samplesCount: Int, samples: UnsafePointer<Double>, arrMag: UnsafeMutablePointer<Float>, maxFrequency: CGFloat) {
    // Zero arrMag (arrMagCount Float values, hence the size in bytes).
    memset(arrMag, 0, arrMagCount * MemoryLayout<Float>.size)
    for iBin in 0..<samplesCount {
        // Store this bin's magnitude in the first band whose upper bound exceeds maxFrequency.
        for iLoop in 0..<arrMagCount {
            if maxFrequency < g.FreqMetersRangeMaxValue[iLoop] {
                arrMag[iLoop] = Float(samples[iBin])
                break
            }
        }
    }
}
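For reference, with the parameters above, the function is called once per FFT frame, roughly like the following. The array names and the maxFrequency value are placeholders, not the project's real identifiers:

// Hypothetical call site: binMagnitudes stands for the FFT output,
// bandMagnitudes is the 12-element summary consumed by the meters.
let binMagnitudes = [Double](repeating: 0, count: 4096)   // placeholder FFT output
var bandMagnitudes = [Float](repeating: 0, count: 12)

binMagnitudes.withUnsafeBufferPointer { samplesPtr in
    bandMagnitudes.withUnsafeMutableBufferPointer { bandsPtr in
        arrMagFill(arrMagCount: 12,
                   samplesCount: 4096,
                   samples: samplesPtr.baseAddress!,
                   arrMag: bandsPtr.baseAddress!,
                   maxFrequency: 20_000)
    }
}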
When I call equivalent C code via an Objective-C bridging header, the CPU usage goes back to between 8% and 12% on average.
Is there a way to obtain the same performance directly in Swift?