
        I'm working on a project where, for user testing, we have a device that records an audio file of the whole test session plus video in 5-minute chunks.  I've written a small macOS app to combine all of those media files into one.  The result plays perfectly with AVPlayer, so I know the composition is good.  However, when I try to export the composition, I get this error:


        GVA error: scheduleDecodeFrame kVTVideoDecoderBadDataErr nal err : acc_size = 36, datasize = 36, video_nal_count = 0, first_preslicehdr_count = 2 ...


        I tried using an AVAssetExportSession at first, but when that failed with the above error, I tried hooking an AVAssetReader up to the composition and an AVAssetWriter to create the file.  That had the same result.  I also tried reducing the test case to one video file and no audio, so there's nothing to combine, and it still fails.


        Here's the code that builds the composition.  The video starts after the audio, and due to some device issues there can be gaps between video files, hence the black filler frames (the _addBlackForGap: helper is omitted here; a rough sketch of it follows the listing).  I'd appreciate any insight into what's going on, or thoughts on how to get what AVPlayer is correctly showing written to disk.  Thanks!


        - (void)_buildMovie {
            AVMutableComposition *mutableComposition = [AVMutableComposition composition];
            
            // video might be shorter with how we log, so base things around the video
            AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
            
            CGSize size = CGSizeZero;
            CMTime time = kCMTimeZero;
            NSMutableArray *instructions = [NSMutableArray new];
            NSError *videoError = nil;
            
            for (NSUInteger movieIndex = 0; movieIndex < self.session.videoFileURLs.count; movieIndex++) {
                NSURL *movieURL = [self.session.videoFileURLs objectAtIndex:movieIndex];
                AVAsset *asset = [AVAsset assetWithURL:movieURL];
                AVAssetTrack *videoAssetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
                if (!videoAssetTrack) {
                    continue;
                }
                
                // make sure it's valid
                if (CMTIME_IS_INVALID(videoAssetTrack.timeRange.duration) ||
                    (CMTimeCompare(videoAssetTrack.timeRange.duration, kCMTimeZero) == 0)) {
                    continue;
                }
                
                // do we need to add black before this movie to account for frames that weren't recorded?
                // the filename is a timestamp in seconds * 1,000,000, on the same clock as firstMovieTimestamp
                NSString *timeStr = [[movieURL lastPathComponent] stringByDeletingPathExtension];
                NSUInteger nextTimeMS = ((NSUInteger)[timeStr integerValue] - self.session.firstMovieTimestamp) / 1000;
                // just convert to seconds to compare
                float gap = (nextTimeMS / 1000.0) - CMTimeGetSeconds(time);
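                // e.g. a file stamped 90,000,000 microseconds after the first movie, when the
                // track so far covers 85.0 sec, gives gap = 90.0 - 85.0 = 5.0 sec (made-up numbers)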
                // slop in videos -- allow up to a 1/10 sec gap since we seem to record around 11 fps
                if (gap > 0.1) {
                    // need to add black to make up for the gap
                    NSLog(@"Found %f sec gap after video %lu - filling with black", gap, movieIndex);
                    time = [self _addBlackForGap:gap intoVideoTrack:videoCompositionTrack atTime:time instructions:instructions];
                    
                }
                
                [videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration)
                                               ofTrack:videoAssetTrack
                                                atTime:time
                                                 error:&videoError];
                if (videoError) {
                    NSLog(@"Error - %@", videoError.debugDescription);
                }
        
                AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
                // removed some code for posting that handles rotated video and isn't relevant to the issue
                AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
                videoCompositionInstruction.timeRange = CMTimeRangeMake(time, videoAssetTrack.timeRange.duration);
                videoCompositionInstruction.layerInstructions = @[layerInstruction];
                [instructions addObject:videoCompositionInstruction];
                
                time = CMTimeAdd(time, videoAssetTrack.timeRange.duration);
                
                if (CGSizeEqualToSize(size, CGSizeZero)) {
                    size = videoAssetTrack.naturalSize;
                }
            }
           
            // add the audio
            AVAsset *audioAsset = [AVAsset assetWithURL:self.session.masterAudioURL];
            AVAssetTrack *audioAssetTrack = [audioAsset tracksWithMediaType:AVMediaTypeAudio].firstObject;
        
        
            // get the first movie and audio times.  they're in seconds * 1,000,000
            NSUInteger realAudioStartTime = self.session.audioTimestamp;
            NSUInteger realVideoStartTime = self.session.firstMovieTimestamp;
            
            CMTime audioOffset = kCMTimeZero;
            CMTime audioTrim = kCMTimeZero;
            CMTime audioOffsetIntoVideo = kCMTimeZero;
            
            CMTime audioDuration = audioAssetTrack.timeRange.duration;
            CMTime videoDuration = videoCompositionTrack.timeRange.duration;
        
        
            if (realAudioStartTime < realVideoStartTime) {
                // audio starts before the video.  need to offset the start time into it
                NSUInteger diff = realVideoStartTime - realAudioStartTime;
                audioOffset = CMTimeMake(diff, 1000000); // diff is in seconds * 1,000,000, so the timescale must match
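                // e.g. audio starting 2.5 sec before the video gives diff = 2,500,000, and
                // CMTimeMake(2500000, 1000000) = 2.5 sec to skip into the audio (made-up numbers)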
                
                // we need to trim the offset into the audio file plus anything after the duration of the video
                // how much longer is the audio than the video overall?
                CMTime endTrim = CMTimeSubtract(audioDuration, videoDuration);
                // now snip out how much we're offsetting
                endTrim = CMTimeSubtract(endTrim, audioOffset);
                // that's what we'll remove from the overall duration we grab
                audioTrim = CMTimeAdd(audioOffset, endTrim);
            } else if (realAudioStartTime > realVideoStartTime){
                // audio starts after the video.  need to offset where we insert it
                NSUInteger diff = realAudioStartTime - realVideoStartTime;
                audioOffsetIntoVideo = CMTimeMake(diff, 1000000); // same units: seconds * 1,000,000
        
        
                // is the audio duration plus this offset longer than the video?  if so, trim the audio
                CMTime audioDurationAndOffset = CMTimeAdd(audioDuration, audioOffsetIntoVideo);
                if (CMTimeCompare(audioDurationAndOffset, videoDuration) > 0) {
                    // CMTimeCompare returns -1 when the video is longer, 1 when the audio would run past the end of the video
                    audioTrim = CMTimeSubtract(audioDurationAndOffset, videoDuration);
                }
            }
            
            AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
            
            NSError *audioError;
            [audioCompositionTrack insertTimeRange:CMTimeRangeMake(audioOffset, CMTimeSubtract(audioAssetTrack.timeRange.duration, audioTrim))
                                           ofTrack:audioAssetTrack
                                            atTime:audioOffsetIntoVideo
                                             error:&audioError];
            if (audioError) {
                NSLog(@"Error - %@", audioError.debugDescription);
            }
            
            
            AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
            mutableVideoComposition.instructions = instructions;
            mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
            mutableVideoComposition.renderSize = size;
            
            // save the composition for the exporter later
            self.composition = mutableComposition;
            self.videoComposition = mutableVideoComposition;
        }
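
        For reference, _addBlackForGap: is omitted above to keep things short.  The sketch below is a simplified version of the idea rather than the exact code -- insert an empty range into the track and add an instruction with no layer instructions, so the composition's background color (black by default) shows through:

        - (CMTime)_addBlackForGap:(float)gap
                   intoVideoTrack:(AVMutableCompositionTrack *)videoTrack
                           atTime:(CMTime)time
                     instructions:(NSMutableArray *)instructions {
            CMTime gapDuration = CMTimeMakeWithSeconds(gap, 600);
            // leave a hole in the video track where the missing frames would be
            [videoTrack insertEmptyTimeRange:CMTimeRangeMake(time, gapDuration)];
            // an instruction with no layerInstructions renders the composition's
            // backgroundColor, which defaults to black
            AVMutableVideoCompositionInstruction *blackInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
            blackInstruction.timeRange = CMTimeRangeMake(time, gapDuration);
            [instructions addObject:blackInstruction];
            return CMTimeAdd(time, gapDuration);
        }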
        

        The saving code is:

            AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:self.composition presetName:AVAssetExportPresetMediumQuality];    
            exportSession.outputURL = fileURL; // defined elsewhere.  it's valid.
            exportSession.outputFileType = AVFileTypeAppleM4V;
            exportSession.videoComposition = self.videoComposition;
            
            
            [exportSession exportAsynchronouslyWithCompletionHandler:^(void) {
                switch (exportSession.status) {
                    case AVAssetExportSessionStatusCompleted:
                        NSLog(@"Completed");
                        break;
                    case AVAssetExportSessionStatusFailed:
                        NSLog(@"Failed:%@",exportSession.error);
                        break;
                    case AVAssetExportSessionStatusCancelled:
                        NSLog(@"Canceled:%@",exportSession.error);
                        break;
                    default:
                        break;
                }
            }];
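
        One sanity check worth adding (not something in the code above -- it just rules out a preset/file-type mismatch): AVAssetExportSession can be asked up front whether the preset, asset, and output file type work together:

            [AVAssetExportSession determineCompatibilityOfExportPreset:AVAssetExportPresetMediumQuality
                                                             withAsset:self.composition
                                                        outputFileType:AVFileTypeAppleM4V
                                                     completionHandler:^(BOOL compatible) {
                NSLog(@"Preset/file type compatible: %d", compatible);
            }];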
        

        Also, when I tried using AVAssetReader/AVAssetWriter directly, it looks like the error pops up when I call copyNextSampleBuffer in here:

        - (BOOL)encodeSamplesFromOutput:(AVAssetReaderOutput *)output toInput:(AVAssetWriterInput *)input {
            while (input.isReadyForMoreMediaData) {
                CMSampleBufferRef sampleBuf = [output copyNextSampleBuffer]; // <-- the GVA error is logged during this call
                if (!sampleBuf) {
                    // the reader is drained (or has failed; the caller checks reader.status)
                    [input markAsFinished];
                    return NO;
                }
                [input appendSampleBuffer:sampleBuf];
                CFRelease(sampleBuf);
            }
            return YES;
        }
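
        The reader side is wired up roughly like this (a simplified sketch rather than the exact code -- the point being that the composition is read through an AVAssetReaderVideoCompositionOutput with the video composition attached, since a plain track output ignores the instructions):

            NSError *readerError = nil;
            AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:self.composition error:&readerError];

            AVAssetReaderVideoCompositionOutput *videoOutput =
                [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:[self.composition tracksWithMediaType:AVMediaTypeVideo]
                                                                                        videoSettings:nil];
            // without this, the black-frame instructions and render size are never applied
            videoOutput.videoComposition = self.videoComposition;
            [reader addOutput:videoOutput];
            [reader startReading];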