Visualization Kernel for Optical Flow
Hey there,

I am currently working on a project that could heavily benefit from optical flow, so I am really happy Apple provides an API to generate it. But before I put further work into it, I want to investigate how accurate the results are. In their WWDC20 presentation, Apple showed a very cool visualization of the optical flow using a custom CIFilter. The kernel code wasn't in the presentation, but they stated they would publish it in the slide attachments. Sadly, I can't seem to find these attachments; at least they are not listed in the "Resources" section for that session.

Does anyone know where I might find the kernel code instead?

Thanks and best regards,
Max

PS: Here is the link to the session I was talking about. The visualization in question can be found at the 23-minute mark. https://developer.apple.com/videos/play/wwdc2020/10673
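In the meantime, here is a rough sketch of such a kernel I put together myself. To be clear, this is not Apple's kernel from the session: it simply maps each flow vector to a color (hue encodes direction, brightness encodes magnitude) and assumes a two-channel float flow image with dx in the red channel and dy in the green channel, as produced by VNGenerateOpticalFlowRequest. The flowView name and maxMag parameter are my own choices.

import CoreImage

// Rough sketch, not Apple's kernel: hue = flow direction, brightness = flow magnitude.
// Assumes a two-channel float flow image (dx in r, dy in g).
let flowKernel = CIColorKernel(source: """
    kernel vec4 flowView(__sample flow, float maxMag) {
        vec2 v = flow.rg;
        float mag = clamp(length(v) / maxMag, 0.0, 1.0);
        float hue = (atan(v.y, v.x) + 3.14159265) / 6.28318531;  // direction -> [0, 1]
        // HSV -> RGB with saturation 1 and value = mag
        vec3 k = mod(vec3(5.0, 3.0, 1.0) + hue * 6.0, 6.0);
        vec3 rgb = mag - mag * clamp(min(k, 4.0 - k), 0.0, 1.0);
        return vec4(rgb, 1.0);
    }
    """)!

// Apply the kernel to a flow CIImage; maxMagnitude is the displacement (in pixels)
// that maps to full brightness.
func visualize(flow: CIImage, maxMagnitude: Float = 30) -> CIImage? {
    flowKernel.apply(extent: flow.extent, arguments: [flow, maxMagnitude])
}

Color-wheel visualizations like this are the standard way to inspect flow fields, so it should at least be close in spirit to what the session showed.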
3 replies · 0 boosts · 1.6k views · Sep ’20
AVAssetWriter with multiple Video Inputs
Hello there!

I have a question regarding AVAssetWriter and writing multiple video tracks into a single file. This is a variant supported by the MP4 container format, and I was able to do it under iOS 14 without any issues: I created a single AVAssetWriter and an AVAssetWriterInput for each video track, gathered all tracks in a single AVAssetWriterInputGroup, which I then added to the AVAssetWriter, and additionally added an input for writing the video's audio track.

All that worked great. However, after I installed the iOS 15 beta, it did not. What I found out is that I can only write 2 video tracks under iOS 15, while I was able to write 3 in a single file under iOS 14. When I try to write 3 tracks with audio, the AVAssetWriter crashes with an error specifying that it failed to encode the video as soon as the first audio buffer is appended. When writing without audio, everything appears to work, but after finishing the writer, the video is empty/corrupted.

Here is my code for setting up the AVAssetWriter:

private func setupWriter(withTarget url: URL, fileType: AVFileType, videoConfig: MultidimensionalVideoWritingConfig, audioConfig: AudioWritingConfig?, defaultType: VideoType = .rgb) {
    do {
        writer = try AVAssetWriter(url: url, fileType: fileType)
    } catch {
        fatalError("Could not create AVAssetWriter: \(error.localizedDescription)")
    }

    for (type, config) in videoConfig {
        let (input, adaptor) = createFrameInput(for: type, from: config)
        frameInputs[type] = input
        frameAdaptors[type] = adaptor
        if writer.canAdd(input) {
            writer.add(input)
        } else {
            fatalError("Could not add \(type.rawValue) frame input to writer.")
        }
    }

    let inputs = Array(frameInputs.values)
    let defaultInput = frameInputs[defaultType]
    let inputGroup = AVAssetWriterInputGroup(inputs: inputs, defaultInput: defaultInput)
    if writer.canAdd(inputGroup) {
        writer.add(inputGroup)
    } else {
        fatalError("Could not add input group to writer.")
    }

    if let audioSettings = audioConfig {
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        input.expectsMediaDataInRealTime = true
        if writer.canAdd(input) {
            writer.add(input)
        } else {
            fatalError("Could not add audio input to writer.")
        }
        audioInput = input
    }
}

private func createFrameInput(for type: VideoType, from config: VideoTrackWritingConfig) -> (AVAssetWriterInput, AVAssetWriterInputPixelBufferAdaptor) {
    let size = config.size
    let frameSettings: [String: Any] = [
        AVVideoCodecKey: config.codec,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ]
    let pixelBufferSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: config.format),
        kCVPixelBufferWidthKey as String: size.width,
        kCVPixelBufferHeightKey as String: size.height
    ]

    let frameInput = AVAssetWriterInput(mediaType: .video, outputSettings: frameSettings)
    frameInput.metadata = createMetadata(for: type)
    frameInput.transform = config.transform
    frameInput.expectsMediaDataInRealTime = true

    let frameAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: frameInput, sourcePixelBufferAttributes: pixelBufferSettings)

    return (frameInput, frameAdaptor)
}

private func createMetadata(for type: VideoType) -> [AVMutableMetadataItem] {
    let typeInfo = metadataItem(.commonIdentifierTitle, value: type.rawValue)
    return [typeInfo]
}

private func metadataItem(_ identifier: AVMetadataIdentifier, value: String) -> AVMutableMetadataItem {
    let item = AVMutableMetadataItem()
    item.identifier = identifier
    item.value = value as NSString
    return item
}

I know it's well possible for this to be a bug, as it's the iOS 15 beta. However, it's oddly specific that it works with 2 video tracks but not with 3. So if you have any idea what this could be, please let me know. Maybe I made some configuration error that just happened to work under iOS 14 for some reason. Thanks!
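For completeness, here is a simplified sketch of how I append buffers once writing has started (frameAdaptors and audioInput are the properties set up above; session start and error handling are trimmed):

func append(_ pixelBuffer: CVPixelBuffer, for type: VideoType, at time: CMTime) {
    // One pixel-buffer adaptor per video track, keyed by track type.
    guard let adaptor = frameAdaptors[type],
          adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    adaptor.append(pixelBuffer, withPresentationTime: time)
}

func append(audio sampleBuffer: CMSampleBuffer) {
    // Under iOS 15, the writer fails as soon as the first of these buffers is appended.
    guard let input = audioInput, input.isReadyForMoreMediaData else { return }
    input.append(sampleBuffer)
}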
2 replies · 2 boosts · 1.7k views · Jun ’21