Adding Audio to Movie Makes Video Uneditable

I'm working on a desktop application that adds an audio clip to a movie. Everything I've written is based on resources I've found for iOS, which is why I'm including the iOS tag as well.


Anyway, I have no trouble adding an audio file to a movie and saving the result to disk as a new file. The problem is that the movie my application produces becomes uneditable. If I read a resulting movie back in and try to export it as a new file after adding CALayer objects or an audio clip, the export session's status comes back as failed. As long as I don't add an audio clip to a movie, the resulting file can be edited again.


The following is what I have.


// video //
let videoAsset = AVURLAsset(url: videoURL)
let videoTracks = videoAsset.tracks(withMediaType: .video)
let videoTrack = videoTracks[0]
let videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
let composition = AVMutableComposition()
let compositionVideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: CMPersistentTrackID())!
try! compositionVideoTrack.insertTimeRange(videoTimeRange, of: videoTrack, at: kCMTimeZero)
//compositionVideoTrack.preferredTransform = videoTrack.preferredTransform


// audio //
let audioAsset = AVURLAsset(url: audioURL)
let audioTracks = audioAsset.tracks(withMediaType: .audio)
let audioTrack = audioTracks[0]
let audioCompositionTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
let newTimeRange = (audioTrack.timeRange.duration > videoTrack.timeRange.duration) ? videoTrack.timeRange : audioTrack.timeRange
try! audioCompositionTrack.insertTimeRange(newTimeRange, of: audioTrack, at: kCMTimeZero)


// video layer //
let videoLayer = CALayer()
videoLayer.isHidden = false
videoLayer.opacity = 1.0
videoLayer.frame = CGRect(x: 0, y: 0, width: 1440.0, height: 900.0)


// parent layer //
let parentLayer = CALayer()
parentLayer.isHidden = false
parentLayer.opacity = 1.0
parentLayer.frame = CGRect(x: 0, y: 0, width: 1440.0, height: 900.0)
parentLayer.addSublayer(videoLayer)


// composition instructions //
let layerComposition = AVMutableVideoComposition()
layerComposition.frameDuration = CMTimeMake(1, 30)
layerComposition.renderSize = self.frameSize
layerComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
layerInstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
instruction.layerInstructions = [layerInstruction] as [AVVideoCompositionLayerInstruction]
layerComposition.instructions = [instruction] as [AVVideoCompositionInstructionProtocol]


let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
...
...
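
The elided part just configures and runs the export and checks the status; roughly this (a sketch; exportURL is a placeholder for my output path):

```swift
// Configure and run the export (sketch; exportURL is a placeholder).
assetExport?.outputFileType = AVFileType.mov
assetExport?.outputURL = exportURL
assetExport?.videoComposition = layerComposition
assetExport?.exportAsynchronously {
    if assetExport?.status == .failed {
        print("Export failed: \(String(describing: assetExport?.error))")
    }
}
```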

Am I doing something wrong? Thanks.

Replies

What is the value of the error property when the status is .failed?

AVAssetExportSession.failed = 4

That looks like the raw value of the ".failed" enum case. What I was asking for was the NSError value returned by the error property.

Oh, I'm sorry about that. The following is what I get.


Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo={NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The video could not be composed.}

Apple's Objective-C sample project (AVSimpleEditoriOS) sets the audioMix property on AVAssetExportSession. I don't, and many of the iOS sample projects I've explored don't either. I wonder if they face the same issue.

Well, the error message doesn't help much, except to confirm that it's a composition issue of some kind.


Have you tried setting a default AVAudioMix object on the session's audioMix property? Perhaps it won't mix unless you tell it to.


AVFoundation doesn't necessarily have the ability to handle arbitrary combinations of formats, so the other issue I'd suggest you look into is whether the audio tracks are in different formats.
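
For example, something along these lines would print each audio track's format so you can compare them (a sketch; movieAudioTrack and clipAudioTrack stand in for your two tracks):

```swift
// Print the media subtype (codec) of each audio track (sketch).
for track in [movieAudioTrack, clipAudioTrack] {
    for desc in track.formatDescriptions as! [CMFormatDescription] {
        // Four-character code such as 'aac ' or 'lpcm'.
        print(CMFormatDescriptionGetMediaSubType(desc))
    }
}
```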


But I don't have any better ideas than that.

Thanks for the useful tips. There's no default value for the audioMix property; it's optional, and it seems you only need to set it when you want to adjust the volume, so I'll skip it for now. As for the container format, I thought you'd hit the jackpot, but I guess not. If I add an m4a-container audio file to a movie, it still becomes uneditable.

I haven't fixed the problem yet, but I now understand what it is. If the base movie already contains an audio track, a newly added one must be mixed with it. That means a new AVMutableAudioMix object must be set on the AVAssetExportSession.
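
In other words, something like the following presumably needs to be set before exporting (a sketch based on the documentation; I haven't confirmed yet that this fixes the problem):

```swift
// Build an audio mix with one input-parameters object per audio track (sketch).
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = composition.tracks(withMediaType: .audio).map { track in
    let parameters = AVMutableAudioMixInputParameters(track: track)
    parameters.setVolume(1.0, at: kCMTimeZero)
    return parameters
}
assetExport?.audioMix = audioMix
```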

By my reading of the documentation, you can set it to a "default AVAudioMix object" (i.e. AVAudioMix()). Any audio track not mentioned in an AVAudioMix object (whether it's really that or an AVMutableAudioMix instead) is supposed to get a default volume ramp. You apparently only need to construct a customized AVMutableAudioMix if you want a customized volume ramp.
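
If that reading is right, the minimal version would be just:

```swift
// A default audio mix; every track gets the default volume ramp.
assetExport?.audioMix = AVAudioMix()
```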

You are right about that, too.