I'm working on a macOS desktop application in which I add an audio clip to a movie. Everything I've written so far is based on resources I've found for iOS, which is why I'm including the iOS tag as well.
Anyway, I have no trouble adding an audio file to a movie and saving the result to disk as a new file. The problem is that the movie my application produces becomes uneditable: if I read the resulting movie back in and try to export it as a new file after adding CALayer objects or an audio clip, the export session finishes with AVAssetExportSessionStatus.failed. As long as I don't add an audio clip to a movie, the resulting file can be edited again.
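For what it's worth, when the status comes back as failed I dump the session's `error` along the lines of this sketch (the function is just an illustration, not my exact code):

```swift
import AVFoundation

// Illustrative only: log why an export ended the way it did.
// `session` stands for whichever AVAssetExportSession ran the export.
func logExportResult(_ session: AVAssetExportSession) {
    switch session.status {
    case .completed:
        print("export completed")
    case .failed:
        // The underlying NSError is usually more specific than the
        // status alone (bad time range, codec issue, ...).
        print("export failed: \(String(describing: session.error))")
    case .cancelled:
        print("export cancelled")
    default:
        print("export status: \(session.status.rawValue)")
    }
}
```

The logged error is how I know the export failed, but it hasn't pointed me to the cause.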
The following is what I have.
// video //
let videoAsset = AVURLAsset(url: videoURL)
let videoTracks = videoAsset.tracks(withMediaType: .video)
let videoTrack = videoTracks[0]
let videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
let composition = AVMutableComposition()
let compositionVideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
try! compositionVideoTrack.insertTimeRange(videoTimeRange, of: videoTrack, at: kCMTimeZero)
//compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
// audio //
let audioAsset = AVURLAsset(url: audioURL)
let audioTracks = audioAsset.tracks(withMediaType: .audio)
let audioTrack = audioTracks[0]
let audioCompositionTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
// Clamp the audio to the shorter of the two tracks so it never runs past the video.
let newDuration = CMTimeMinimum(audioTrack.timeRange.duration, videoTrack.timeRange.duration)
let newTimeRange = CMTimeRangeMake(kCMTimeZero, newDuration)
try! audioCompositionTrack.insertTimeRange(newTimeRange, of: audioTrack, at: kCMTimeZero)
// video layer //
let videoLayer = CALayer()
videoLayer.isHidden = false
videoLayer.opacity = 1.0
videoLayer.frame = CGRect(x: 0, y: 0, width: 1440.0, height: 900.0)
// parent layer //
let parentLayer = CALayer()
parentLayer.isHidden = false
parentLayer.opacity = 1.0
parentLayer.frame = CGRect(x: 0, y: 0, width: 1440.0, height: 900.0)
parentLayer.addSublayer(videoLayer)
// composition instructions //
let layerComposition = AVMutableVideoComposition()
layerComposition.frameDuration = CMTimeMake(1, 30)
layerComposition.renderSize = self.frameSize
layerComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
layerInstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
instruction.layerInstructions = [layerInstruction]
layerComposition.instructions = [instruction]
let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
...
...
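The elided part is the standard export boilerplate, roughly the following, continuing from `assetExport` above (the output URL is a placeholder; note that `videoComposition` has to be set on the session for the CALayer work to be rendered at all):

```swift
// Sketch of the elided export setup (outputURL is a placeholder).
assetExport?.outputURL = outputURL                // destination file URL
assetExport?.outputFileType = .mov                // or .mp4, depending on the container
assetExport?.videoComposition = layerComposition  // required for the layer instructions to apply
assetExport?.exportAsynchronously {
    switch assetExport!.status {
    case .completed:
        print("export completed")
    case .failed:
        print("export failed: \(String(describing: assetExport?.error))")
    default:
        print("export status: \(assetExport!.status.rawValue)")
    }
}
```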
Am I doing something wrong? Thanks.