
Post not yet marked as solved · 4 Replies · 4.6k Views
My code for combining an .mp4 file and an .aac file into an .mov file had worked just fine from the very beginning: it creates an AVMutableComposition with one video track and one audio track from the input video and audio files, and after exportAsynchronously() the AVAssetExportSession finishes with AVAssetExportSession.Status.completed. Here is the Swift source code of the key function:

func compileAudioAndVideoToMovie(audioInputURL: URL, videoInputURL: URL) {
    let docPath: String = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let videoOutputURL: URL = URL(fileURLWithPath: docPath).appendingPathComponent("video_output.mov")
    do {
        try FileManager.default.removeItem(at: videoOutputURL)
    } catch {}

    let mixComposition = AVMutableComposition()
    let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
    let videoInputAsset = AVURLAsset(url: videoInputURL)
    let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioInputAsset = AVURLAsset(url: audioInputURL)
    do {
        // Insert a 3-second video clip into the video track
        try videoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: 0, timescale: 1000), duration: CMTimeMake(value: 3000, timescale: 1000)),
                                        of: videoInputAsset.tracks(withMediaType: AVMediaType.video)[0],
                                        at: CMTimeMake(value: 0, timescale: 1000))
        // Insert a 3-second audio clip into the audio track
        try audioTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: 0, timescale: 1000), duration: CMTimeMake(value: 3000, timescale: 1000)),
                                        of: audioInputAsset.tracks(withMediaType: AVMediaType.audio)[0],
                                        at: CMTimeMake(value: 0, timescale: 1000))

        let assetExporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough)
        assetExporter?.outputFileType = AVFileType.mov
        assetExporter?.outputURL = videoOutputURL
        assetExporter?.shouldOptimizeForNetworkUse = false
        assetExporter?.exportAsynchronously {
            switch assetExporter?.status {
            case .cancelled:
                print("Exporting cancelled")
            case .completed:
                print("Exporting completed")
            case .exporting:
                print("Exporting ...")
            case .failed:
                print("Exporting failed")
            default:
                print("Exporting with other result")
            }
            if let error = assetExporter?.error {
                print("Error:\n\(error)")
            }
        }
    } catch {
        print("Exception when compiling movie")
    }
}

However, after I upgraded my iPhone to iOS 13 (beta), the export always ends up with the .failed status, and AVAssetExportSession.error reads:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x2815e9bc0 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}}

I've tested this on an iPhone 6 Plus and an iPhone 7; both give the same result. You can clone my minimal demo project, with sample input audio and video files embedded in its bundle, from https://github.com/chenqiu1024/iOS13VideoRecordingError.git, run it, and check the console output. Is there any explanation or suggestion?
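One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to ask AVFoundation up front whether it considers the passthrough preset compatible with this composition and output type on the running OS, and which presets it does consider usable. The helper name checkPassthroughCompatibility and the idea of comparing against a re-encoding preset are my additions, not part of the original project:

import AVFoundation

// Diagnostic sketch only: `composition` stands for the AVMutableComposition built in
// compileAudioAndVideoToMovie(...) above. This does not fix the -11800/-12735 failure;
// it only reports whether the OS itself claims passthrough export to .mov is possible.
func checkPassthroughCompatibility(for composition: AVAsset) {
    AVAssetExportSession.determineCompatibility(ofExportPreset: AVAssetExportPresetPassthrough,
                                                with: composition,
                                                outputFileType: .mov) { isCompatible in
        print("Passthrough -> .mov compatible: \(isCompatible)")
        if !isCompatible {
            // List the presets the system does report as usable, to test whether only
            // passthrough is affected (a re-encoding preset such as
            // AVAssetExportPresetHighestQuality could then be tried as a comparison).
            print("Compatible presets: \(AVAssetExportSession.exportPresets(compatibleWith: composition))")
        }
    }
}

If the compatibility check reports true but the export still fails, that would point at the passthrough export path on iOS 13 rather than at the composition itself.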
Posted by Cyllenge.
Post not yet marked as solved · 0 Replies · 1.5k Views
I am working on an iOS app that mixes an audio file with the user's voice input into a new file while simultaneously playing the content of that audio file. You can think of this app as a karaoke player: it records both the singer's voice and the original soundtrack into a file while playing the original soundtrack for the singer.

I use an AUGraph to establish the audio processing flow:

1. One mixer AudioUnit (type kAudioUnitType_Mixer, subtype kAudioUnitSubType_MultiChannelMixer) with 2 inputs.
2. One resampler AudioUnit (kAudioUnitType_FormatConverter, kAudioUnitSubType_AUConverter) for the necessary sample rate conversion from the sample rate of the audio file to that of the mic input, so that the formats of the mixer's 2 input buses match.
3. Connections: mic output (the output element of the IO node's input scope) -> mixer input 0; resampler output -> mixer input 1; mixer output -> speaker input (the input element of the IO node's output scope).
4. A render callback set on the resampler via AUGraphSetNodeInputCallback(). This callback copies the demanded number of audio frames from the source audio file stream into the destination AudioBufferList, so that the sound is played by the speaker.
5. A render notify callback added to the mixer via AudioUnitAddRenderNotify(). This callback pulls the mixed audio data and writes it into the destination file.

This flow works, but with one fault: it plays the audio mixed with the mic input through the speaker, which is not desired. I need the speaker to play only the sound from the source audio file, without the singer's voice mixed in; the mixed audio is for recording only, not for playback.

I've tried several modifications to the above AUGraph, but none of them works. An instinctive thought is to use a "splitter" audio unit to duplicate the resampler's output into 2 streams, one connected to the mixer's input and the other to the input element of the IO node's output scope (i.e., the speaker's input bus). However, as Apple mentions in AUComponent.h, "Except for AUConverter, which is available on both desktop and iPhone, these audio units are only available on the desktop." That means we can't use an AudioUnit with subtype kAudioUnitSubType_Splitter or kAudioUnitSubType_MultiSplitter on iOS. I did actually try adding an AudioUnit with subtype kAudioUnitSubType_MultiSplitter and constructing the AUGraph as described, and of course with no miracle.

So how should I implement this feature?
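One possible direction, offered only as a sketch under assumptions rather than a verified solution: connect only the file-audio path (the resampler) to the RemoteIO output, so the speaker never receives the mic; obtain the mic samples separately with an input callback on the IO unit (kAudioOutputUnitProperty_SetInputCallback plus AudioUnitRender on the input element); then sum the mic samples with the file audio in your own code and hand the result to an ExtAudioFileRef for writing. The helper below shows only that last mixing-and-writing step; the function name writeRecordingMix, the buffer names, and the assumption of non-interleaved Float32 data at a common sample rate are all hypothetical:

import AudioToolbox

// Hypothetical sketch of the "mix for recording only" step. It assumes the speaker
// path carries only the file audio, and that the mic samples have already been
// rendered into `micAudio` (e.g. via an input callback on the RemoteIO unit).
// Both buffer lists are assumed to hold non-interleaved Float32 samples.
func writeRecordingMix(fileAudio: UnsafePointer<AudioBufferList>,
                       micAudio: UnsafeMutablePointer<AudioBufferList>,
                       frameCount: UInt32,
                       destinationFile: ExtAudioFileRef) {
    let fileBuffers = UnsafeMutableAudioBufferListPointer(UnsafeMutablePointer(mutating: fileAudio))
    let micBuffers = UnsafeMutableAudioBufferListPointer(micAudio)

    // Sum the file audio into the mic buffers; the mic path is not connected to the
    // speaker in this topology, so modifying it does not change what is heard.
    for (fileBuffer, micBuffer) in zip(fileBuffers, micBuffers) {
        guard let src = fileBuffer.mData?.assumingMemoryBound(to: Float32.self),
              let dst = micBuffer.mData?.assumingMemoryBound(to: Float32.self) else { continue }
        for frame in 0..<Int(frameCount) {
            dst[frame] += src[frame]   // naive sum; real code should watch for clipping
        }
    }

    // ExtAudioFileWriteAsync can be used from a render callback once it has been
    // primed (called once with 0 frames and nil from a non-render thread).
    ExtAudioFileWriteAsync(destinationFile, frameCount, micAudio)
}

The design point of this layout is that the playback path and the recording path no longer share the mixer, so what the speaker plays and what gets written to the file can differ, which is exactly the asymmetry the splitter unit would have provided if it were available on iOS.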
Posted by Cyllenge.