Don't schedule your files (or segments or buffers) with a delayed start time - pass nil instead - and schedule them (e.g. in a setup/prepare method) long before you actually start the players.
Then, instead, delay the players themselves at start-up...
If your engine is already running, you have the @property lastRenderTime in AVAudioNode - your player's superclass. This is your ticket to 100% sample-frame accurate sync...
| // The player's output format and its current render position... |
| AVAudioFormat *outputFormat = [playerA outputFormatForBus:0]; |
| |
| const float kStartDelayTime = 0.0; // optional extra delay in seconds |
| |
| AVAudioFramePosition startSampleTime = playerA.lastRenderTime.sampleTime; |
| |
| // ...give you one common AVAudioTime for all players to start at. |
| AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(startSampleTime + (kStartDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate]; |
| |
| [playerA playAtTime: startTime]; |
| [playerB playAtTime: startTime]; |
| [playerC playAtTime: startTime]; |
| [playerD playAtTime: startTime]; |
| [player... |
By the way - you can achieve the same 100% sample-frame accurate result with the AVAudioPlayer class...
| NSTimeInterval startDelayTime = 0.0; |
| |
| NSTimeInterval now = playerA.deviceCurrentTime; |
| |
| NSTimeInterval startTime = now + startDelayTime; |
| |
| [playerA playAtTime: startTime]; |
| [playerB playAtTime: startTime]; |
| [playerC playAtTime: startTime]; |
| [playerD playAtTime: startTime]; |
| [player... |
With no startDelayTime, the first 100-200 ms of all players get clipped off, because the start command takes its time to travel through the run loop, although the players have already started (well, been scheduled) 100% in sync at now. With a startDelayTime of 0.25 you are good to go. And never forget to prepareToPlay your players in advance, so that at start time no additional buffering or setup has to be done - the players just have to start ;-)
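A minimal sketch of that combination, assuming the same playerA ... playerD AVAudioPlayer instances as above:
| // Prepare well in advance so the play calls only have to start the players. |
| [playerA prepareToPlay]; |
| [playerB prepareToPlay]; |
| [playerC prepareToPlay]; |
| [playerD prepareToPlay]; |
| |
| // 250 ms of headroom so the run loop latency cannot clip the beginning. |
| NSTimeInterval startTime = playerA.deviceCurrentTime + 0.25; |
| |
| [playerA playAtTime:startTime]; |
| [playerB playAtTime:startTime]; |
| [playerC playAtTime:startTime]; |
| [playerD playAtTime:startTime]; |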
I have had my iPhone 4s playing lots of stereo tracks at the same time for hours in perfect sync ;-) I even sorted out a one-frame glitch, traced back to a floating point rounding error! By rounding float seconds to integer frames before calculating any startingFrames and frameCounts you stay 100% sample-frame accurate (see the end of this post).
ONE ADDITIONAL AUDIO PRO-TIP:
If you really want to be sure that you have perfect sync, just use your favorite CD-ripped song.wav in 44.1kHz/16bit.
Make a copy of it, load the copy into an audio editor, invert the phase and save it.
When you now schedule both versions at exactly the same startTime (and of course with the same volume/pan settings) you will hear - SILENCE...
Because of the phase inversion they cancel each other 100%.
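By the way, if you don't have an audio editor at hand, the phase-inverted copy can also be created in code. Here is a minimal sketch (the file names and the output path are just placeholder assumptions): read the song into an AVAudioPCMBuffer, negate every sample and write the result to a new file.
| // Sketch: create a phase-inverted copy of a file in code. |
| // "mySong-original.wav" and the output location are placeholder assumptions. |
| NSError *error = nil; |
| NSURL *inURL = [[NSBundle mainBundle] URLForResource:@"mySong-original" withExtension:@"wav"]; |
| AVAudioFile *inFile = [[AVAudioFile alloc] initForReading:inURL error:&error]; |
| |
| AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:inFile.processingFormat frameCapacity:(AVAudioFrameCount)inFile.length]; |
| [inFile readIntoBuffer:buffer error:&error]; |
| |
| // Negating every sample flips the phase by 180 degrees. |
| for (AVAudioChannelCount ch = 0; ch < buffer.format.channelCount; ch++) { |
|     float *samples = buffer.floatChannelData[ch]; |
|     for (AVAudioFrameCount i = 0; i < buffer.frameLength; i++) { |
|         samples[i] = -samples[i]; |
|     } |
| } |
| |
| NSURL *outURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"mySong-phase-inversed.caf"]]; |
| AVAudioFile *outFile = [[AVAudioFile alloc] initForWriting:outURL settings:inFile.processingFormat.settings error:&error]; |
| [outFile writeFromBuffer:buffer error:&error]; |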
If for some reason this perfect cancellation gets disturbed by any out-of-sync issue, you will hear clicks and other noises, or in case of a frame loss (like dropping a single frame) you will suddenly hear your whole song, but with a flanging and phasing sound, sharp highs and no lows, etc. If somewhere along the way you win a frame back, your players could even fall back into perfect sync. But if more frames get dropped (widening the gap), this flanging finally turns into a very short delay sound as your players drift apart.
If you are curious and just want to hear how this sounds, force-delay one of two players by one single frame, using the same file on both players (see the sketch below). Then add a second frame to your delay variable, and so on...
Later do the same with an in-phase and a phase-inverted version of the song scheduled on two players - just to hear the difference. Now you are an audio pro and no framework, algorithm or programming language can do any harm to your music... ;-)
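Here is a minimal sketch of such a forced delay, assuming two AVAudioPlayerNodes on a running engine with the same file already scheduled (at atTime:nil) on both: derive one start time from lastRenderTime and shift the second player by a fixed number of sample-frames.
| // Force playerB to start kDelayFrames sample-frames after playerA. |
| const AVAudioFramePosition kDelayFrames = 1; // try 1, then 2, 3, ... |
| |
| AVAudioFormat *outputFormat = [playerA outputFormatForBus:0]; |
| AVAudioFramePosition startSampleTime = playerA.lastRenderTime.sampleTime + (AVAudioFramePosition)(0.25 * outputFormat.sampleRate); |
| |
| AVAudioTime *startTimeA = [AVAudioTime timeWithSampleTime:startSampleTime atRate:outputFormat.sampleRate]; |
| AVAudioTime *startTimeB = [AVAudioTime timeWithSampleTime:(startSampleTime + kDelayFrames) atRate:outputFormat.sampleRate]; |
| |
| [playerA playAtTime:startTimeA]; |
| [playerB playAtTime:startTimeB]; |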
You can even do this:
Prepare your setup...
| audioSession = [AVAudioSession sharedInstance]; |
| [audioSession setCategory: AVAudioSessionCategoryPlayback error: nil]; |
| [audioSession setActive: YES error: nil]; |
| |
| NSString *soundFilePathA = [[NSBundle mainBundle] pathForResource: @"mySong-original" |
| ofType: @"wav"]; |
| NSString *soundFilePathB = [[NSBundle mainBundle] pathForResource: @"mySong-phase-inversed" |
| ofType: @"wav"]; |
| NSURL *fileURLforPlayerA = [[NSURL alloc] initFileURLWithPath: soundFilePathA]; |
| NSURL *fileURLforPlayerB = [[NSURL alloc] initFileURLWithPath: soundFilePathB]; |
| fileForPlayerA = [[AVAudioFile alloc] initForReading:fileURLforPlayerA error:nil]; |
| fileForPlayerB = [[AVAudioFile alloc] initForReading:fileURLforPlayerB error:nil]; |
| |
| engine  = [[AVAudioEngine alloc] init]; |
| playerA = [[AVAudioPlayerNode alloc] init]; |
| playerB = [[AVAudioPlayerNode alloc] init]; |
| [engine attachNode:playerA]; |
| [engine attachNode:playerB]; |
| |
| mainMixer = [engine mainMixerNode]; |
| [engine connect:playerA to:mainMixer format:fileForPlayerA.processingFormat]; |
| [engine connect:playerB to:mainMixer format:fileForPlayerB.processingFormat]; |
| [engine startAndReturnError:nil]; |
Now split your song into regions and schedule them, one after the other, as segments of the original file onto playerA.
On playerB schedule the phase-inverted version as one whole piece!
| |
| |
| NSTimeInterval anchorPart1 = 0; |
| NSTimeInterval anchorPart2 = 13.13; |
| NSTimeInterval anchorPart3 = 24.95; |
| NSTimeInterval anchorPart4 = 36.78; |
| NSTimeInterval anchorPart5 = 48.45; |
| NSTimeInterval anchorPart6 = 71.77; |
| |
| |
| |
| |
| // Cast to whole frames FIRST - see the note at the end of this post |
| AVAudioFramePosition positionPart1 = (AVAudioFramePosition)(anchorPart1 * fileForPlayerA.fileFormat.sampleRate); |
| AVAudioFramePosition positionPart2 = (AVAudioFramePosition)(anchorPart2 * fileForPlayerA.fileFormat.sampleRate); |
| AVAudioFramePosition positionPart3 = (AVAudioFramePosition)(anchorPart3 * fileForPlayerA.fileFormat.sampleRate); |
| AVAudioFramePosition positionPart4 = (AVAudioFramePosition)(anchorPart4 * fileForPlayerA.fileFormat.sampleRate); |
| AVAudioFramePosition positionPart5 = (AVAudioFramePosition)(anchorPart5 * fileForPlayerA.fileFormat.sampleRate); |
| AVAudioFramePosition positionPart6 = (AVAudioFramePosition)(anchorPart6 * fileForPlayerA.fileFormat.sampleRate); |
| |
| |
| |
| |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart1 frameCount:(UInt32)((positionPart2 - positionPart1) + 0) atTime:nil completionHandler:nil]; |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart2 frameCount:(UInt32)((positionPart3 - positionPart2) + 0) atTime:nil completionHandler:nil]; |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart3 frameCount:(UInt32)((positionPart4 - positionPart3) + 0) atTime:nil completionHandler:nil]; |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart4 frameCount:(UInt32)((positionPart5 - positionPart4) + 0) atTime:nil completionHandler:nil]; |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart5 frameCount:(UInt32)((positionPart6 - positionPart5) + 0) atTime:nil completionHandler:nil]; |
| [playerA scheduleSegment:fileForPlayerA startingFrame:positionPart6 frameCount:(UInt32)(fileForPlayerA.length - positionPart6) atTime:nil completionHandler:nil]; |
| |
| |
| |
| [playerB scheduleFile:fileForPlayerB atTime:nil completionHandler:nil]; |
If the engine messes up only one single sample-frame on all these cues, the perfect SILENCE will be broken !!!
Now you can schedule another pair of in-phase/out-of-phase players in parallel to see how your engine and device are doing with 4 stereo tracks, then two more... - until you hit the limit of your device ;-)
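A minimal sketch of adding such a second pair, assuming playerC/playerD are two further AVAudioPlayerNode ivars and reusing the engine, mixer and files from the setup above:
| playerC = [[AVAudioPlayerNode alloc] init]; |
| playerD = [[AVAudioPlayerNode alloc] init]; |
| [engine attachNode:playerC]; |
| [engine attachNode:playerD]; |
| [engine connect:playerC to:mainMixer format:fileForPlayerA.processingFormat]; |
| [engine connect:playerD to:mainMixer format:fileForPlayerB.processingFormat]; |
| |
| // Same trick, second pair: original and phase-inverted version, scheduled with atTime:nil. |
| [playerC scheduleFile:fileForPlayerA atTime:nil completionHandler:nil]; |
| [playerD scheduleFile:fileForPlayerB atTime:nil completionHandler:nil]; |
| |
| // Then start all four players at one common AVAudioTime as shown at the top of this post. |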
--------------------------------------------------------------------------
BTW: As already mentioned above - if you don't do the casting on the 6 cue points above, you will notice that regions sometimes drop a frame due to a floating point rounding error when calculating ---> frameCount:(UInt32)(positionPart3 - positionPart2) with NSTimeIntervals, because the float values of both the start and the end position lie somewhere between the actual frame boundaries (44100 frames/samples per second). From that moment on the two players are one sample off-sync and you will suddenly hear your audio. Why the frame drop?
Just imagine, e.g.:
| variable1 = round(10.7)            // 11 |
| variable2 = round(10.7)            // 11 |
| variable3 = variable1 + variable2  // 22 |
When you assign (cast) this variable3 to an integer variable (like frameCount above), the result is
frameCount = (UInt32)variable3 // = 22 frames
Without prior rounding/casting though
| variable1 = (10.7) |
| variable2 = (10.7) |
| variable3 = variable1 + variable2  // 21.4 |
the same assignment (cast) truncates the fractional part:
frameCount = (UInt32)variable3 // = 21 frames
So you have dropped one frame and the player is out of sync...
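To make the rule concrete, here is a minimal sketch (secondsToFrames is just a hypothetical helper name): convert seconds to whole frames first, then do all further arithmetic on the integer frame positions.
| // Hypothetical helper: seconds -> whole sample-frames, rounded once. |
| static AVAudioFramePosition secondsToFrames(NSTimeInterval seconds, double sampleRate) { |
|     return (AVAudioFramePosition)round(seconds * sampleRate); |
| } |
| |
| double sampleRate = fileForPlayerA.fileFormat.sampleRate; |
| AVAudioFramePosition startFrame = secondsToFrames(13.13, sampleRate); |
| AVAudioFramePosition endFrame   = secondsToFrames(24.95, sampleRate); |
| |
| // The subtraction now happens between two exact integers - no frame can get lost. |
| AVAudioFrameCount frameCount = (AVAudioFrameCount)(endFrame - startFrame); |
| [playerA scheduleSegment:fileForPlayerA startingFrame:startFrame frameCount:frameCount atTime:nil completionHandler:nil]; |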
----