Posts

Post not yet marked as solved
0 Replies
228 Views
Hey all! I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.) When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio.

I wonder how recording in stereo audio works; are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?

This is my Audio Session code:

```swift
func configureAudioSession(configuration: CameraConfiguration) throws {
  ReactLogger.log(level: .info, message: "Configuring Audio Session...")

  // Prevent iOS from automatically configuring the Audio Session for us
  audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  let enableAudio = configuration.audio != .disabled

  // Check microphone permission
  if enableAudio {
    let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    if audioPermissionStatus != .authorized {
      throw CameraError.permission(.microphone)
    }
  }

  // Remove all current inputs
  for input in audioCaptureSession.inputs {
    audioCaptureSession.removeInput(input)
  }
  audioDeviceInput = nil

  // Audio Input (Microphone)
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio input...")
    guard let microphone = AVCaptureDevice.default(for: .audio) else {
      throw CameraError.device(.microphoneUnavailable)
    }
    let input = try AVCaptureDeviceInput(device: microphone)
    guard audioCaptureSession.canAddInput(input) else {
      throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
    }
    audioCaptureSession.addInput(input)
    audioDeviceInput = input
  }

  // Remove all current outputs
  for output in audioCaptureSession.outputs {
    audioCaptureSession.removeOutput(output)
  }
  audioOutput = nil

  // Audio Output
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    let output = AVCaptureAudioDataOutput()
    guard audioCaptureSession.canAddOutput(output) else {
      throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
    }
    output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
    audioCaptureSession.addOutput(output)
    audioOutput = output
  }
}
```

This is how I activate the audio session just before I start recording:

```swift
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])

if #available(iOS 14.5, *) {
  // prevents the audio session from being interrupted by a phone call
  try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}

if #available(iOS 13.0, *) {
  // allow system sounds (notifications, calls, music) to play while recording
  try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}

audioCaptureSession.startRunning()
```

And this is how I set up the AVAssetWriter:

```swift
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
```

The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono. Is there anything I'm missing here?
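For comparison, a minimal sketch of how stereo capture from the built-in microphones is typically requested on iOS 14 and later: the shared AVAudioSession's built-in mic input is switched to a data source that supports the .stereo polar pattern. This sketch is not taken from the post above; the helper name and the portrait input orientation are assumptions, and on the writer side the outputSettings would presumably also need AVNumberOfChannelsKey set to 2 for the resulting track to come out stereo.

```swift
import AVFoundation

/// Attempts to switch the built-in microphone to a stereo polar pattern (iOS 14+).
/// Returns true if a stereo-capable data source was found and selected.
@available(iOS 14.0, *)
func enableStereoCapture(on session: AVAudioSession) throws -> Bool {
  // Find the built-in microphone among the available inputs
  guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else {
    return false
  }
  try session.setPreferredInput(builtInMic)

  // Pick a data source that supports the stereo polar pattern
  guard let stereoSource = builtInMic.dataSources?.first(where: {
    $0.supportedPolarPatterns?.contains(.stereo) == true
  }) else {
    return false
  }
  try builtInMic.setPreferredDataSource(stereoSource)
  try stereoSource.setPreferredPolarPattern(.stereo)

  // Align the stereo image with the UI orientation (assumed portrait here)
  try session.setPreferredInputOrientation(.portrait)
  return true
}
```

Whether the audio CMSampleBuffers then arrive as interleaved two-channel buffers also depends on the hardware and data source; checking the buffer's format description at runtime would confirm it.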
Posted by mrousavy.
Post not yet marked as solved
2 Replies
541 Views
I'm using the AVFoundation Swift APIs to record a Video (CMSampleBuffers) and Audio (CMSampleBuffers) to a file using AVAssetWriter.

Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release. Since it blocks all threads and only happens in Debug, I'm led to believe that this is an Xcode/Debugger/LLDB hang issue that I'm seeing. Does anyone experience something similar?

Here's how I set all of that up: startRecording(...)

And here's the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
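A sketch of one possible mitigation, assuming the stall is a one-time initialization cost rather than something per-recording: build the writer and call startWriting() eagerly on a background queue before the user taps record, and time the call to see where the time actually goes. The prepareWriter name and queue label are hypothetical.

```swift
import AVFoundation

let writerQueue = DispatchQueue(label: "recording.writer-setup", qos: .userInitiated)

/// Hypothetical helper: builds the writer and calls startWriting() ahead of time,
/// so a Debug-only stall does not land on the record button tap.
func prepareWriter(outputURL: URL, completion: @escaping (Result<AVAssetWriter, Error>) -> Void) {
  writerQueue.async {
    do {
      let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
      // ...add the AVAssetWriterInputs here, before startWriting()...

      let start = CFAbsoluteTimeGetCurrent()
      let started = writer.startWriting()
      print("startWriting() returned \(started) after \(CFAbsoluteTimeGetCurrent() - start)s")

      completion(.success(writer))
    } catch {
      completion(.failure(error))
    }
  }
}
```

startSession(atSourceTime:) would still be called later, at the first appended buffer, so preparing the writer early should not change the recorded timeline.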
Posted by mrousavy.
Post not yet marked as solved
1 Reply
425 Views
I'm building a Camera app, where I have two AVCaptureSessions, one for video and one for audio. (See this for an explanation of why I don't just have one.) I receive my CMSampleBuffers in the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates.

Now, when I enable the video stabilization mode "cinematicExtended", the AVCaptureVideoDataOutput has a 1-2 second delay, meaning I will receive my audio CMSampleBuffers 1-2 seconds earlier than I will receive my video CMSampleBuffers!

This is the code:

```swift
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
  let type = captureOutput is AVCaptureVideoDataOutput ? "Video" : "Audio"
  let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
  print("Incoming \(type) buffer at \(timestamp.seconds) seconds...")
}
```

Without video stabilization, this logs:

```
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107862.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107862.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107862.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
```

With video stabilization, this logs:

```
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107861.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107861.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107861.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
```

As you can see, the video frames arrive almost a full second later than when they are intended to be presented!

There are a few guides on how to use AVAssetWriter online, but all of them recommend starting the AVAssetWriter session once the first video frame arrives - in my case I cannot do that, since the first second of video frames is from before the user even started the recording.

I also can't really wait 1 second here, as then I would lose 1 second of audio samples, since those are realtime and not delayed.

I also can't really start the session on the first audio frame and drop all video frames until that point, since then the resulting video would start with one blank frame, as the video frame is never exactly on that first audio frame timestamp.

Any advice on how I can synchronize that?

Here is my code: RecordingSession.swift
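A sketch of one way such a delivery delay might be bridged, assuming both capture sessions timestamp their buffers against the host clock: queue the real-time audio buffers in memory, start the writer session at the presentation timestamp of the first video buffer captured after the record tap, then drain the queued audio that falls at or after that time. The DelayTolerantRecorder type below is hypothetical and omits locking and error handling.

```swift
import AVFoundation

final class DelayTolerantRecorder {
  private let assetWriter: AVAssetWriter
  private let videoInput: AVAssetWriterInput
  private let audioInput: AVAssetWriterInput
  private var pendingAudio: [CMSampleBuffer] = []   // audio arrives 1-2s before video
  private var sessionStart: CMTime?
  private let recordRequestTime = CMClockGetTime(CMClockGetHostTimeClock())

  init(assetWriter: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
    // assetWriter.startWriting() is assumed to have been called by the owner already.
    self.assetWriter = assetWriter
    self.videoInput = videoInput
    self.audioInput = audioInput
  }

  func appendVideo(_ buffer: CMSampleBuffer) {
    let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
    // Skip stabilization "pre-roll" frames captured before the user tapped record
    guard pts >= recordRequestTime else { return }
    if sessionStart == nil {
      sessionStart = pts
      assetWriter.startSession(atSourceTime: pts)
      // Flush audio that was queued while waiting for the first usable video frame
      for audio in pendingAudio where CMSampleBufferGetPresentationTimeStamp(audio) >= pts {
        if audioInput.isReadyForMoreMediaData { audioInput.append(audio) }
      }
      pendingAudio.removeAll()
    }
    if videoInput.isReadyForMoreMediaData { videoInput.append(buffer) }
  }

  func appendAudio(_ buffer: CMSampleBuffer) {
    if sessionStart == nil {
      pendingAudio.append(buffer)                  // hold until the session has started
    } else if audioInput.isReadyForMoreMediaData {
      audioInput.append(buffer)
    }
  }
}
```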
Posted by mrousavy.
Post marked as solved
1 Reply
554 Views
Hey all! I'm trying to record Video from one AVCaptureSession, and Audio from another AVCaptureSession. The reason I'm using two separate capture sessions is because I want to disable and enable the Audio one on the fly without interrupting the Video session. I believe Snapchat and Instagram also use this approach, as background music keeps playing when you open the Camera, and only slightly stutters (caused by the AVAudioSession.setCategory(..) call) once you start recording.

However I couldn't manage to synchronize the two AVCaptureSessions, and whenever I try to record CMSampleBuffers into an AVAssetWriter, the video and audio frames are out of sync. Here's a quick YouTube video showcasing the offset: https://youtube.com/shorts/jF1arThiALc

I notice two bugs:

1. The video and audio tracks are out of sync - video frames start almost a second before the first audio sample starts to be played back, and towards the end the delay is also noticeable because the video stops / freezes while the audio continues to play.
2. The video contains frames from BEFORE I even pressed startRecording(), as if my iPhone had a time machine!

I am not sure how the second one can even happen, so at this point I'm asking for help if anyone has any experience with that.

Roughly my code:

```swift
let videoCaptureSession = AVCaptureSession()
let audioCaptureSession = AVCaptureSession()

func setup() {
  // ...adding videoCaptureSession outputs (AVCaptureVideoDataOutput)
  // ...adding audioCaptureSession outputs (AVCaptureAudioDataOutput)
  videoCaptureSession.startRunning()
}

func startRecording() {
  self.assetWriter = AVAssetWriter(outputURL: tempURL, fileType: .mov)
  self.videoWriter = AVAssetWriterInput(...)
  assetWriter.add(videoWriter)
  self.audioWriter = AVAssetWriterInput(...)
  assetWriter.add(audioWriter)

  AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker])
  audioCaptureSession.startRunning() // <-- lazy start that
}

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
  // Record Video Frame/Audio Sample to File in custom `RecordingSession` (AVAssetWriter)
  if isRecording {
    switch captureOutput {
    case is AVCaptureVideoDataOutput:
      self.videoWriter.append(sampleBuffer)
    case is AVCaptureAudioDataOutput:
      // TODO: Do I need to update the PresentationTimestamp here to synchronize it to the other capture session? or not?
      self.audioWriter.append(sampleBuffer)
    default:
      break
    }
  }
}
```

Full code here:
- Video Capture Session Configuration
- Audio Capture Session Configuration
- Later on, startRecording() call
- RecordingSession, my AVAssetWriter abstraction
- Audio Session activation
- And finally, writing the CMSampleBuffers
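A sketch of how the second bug might be avoided, assuming both capture sessions stamp their buffers against the host clock so a single reference time is meaningful for both tracks: record one timestamp when the user presses record, start the writer session at that time, and drop every buffer with an earlier presentation timestamp instead of rewriting timestamps. The RecordingCoordinator type is hypothetical and mirrors the property names from the rough code above.

```swift
import AVFoundation

final class RecordingCoordinator {
  // These mirror the properties from the rough code above (hypothetical wiring).
  var assetWriter: AVAssetWriter!
  var videoWriter: AVAssetWriterInput!
  var audioWriter: AVAssetWriterInput!
  private var recordingStartTime: CMTime?

  func startRecording() {
    // Both capture sessions are assumed to timestamp buffers against the host clock,
    // so one "record pressed" time can gate both tracks.
    recordingStartTime = CMClockGetTime(CMClockGetHostTimeClock())
    // assetWriter.startWriting() is assumed to have been called already.
    assetWriter.startSession(atSourceTime: recordingStartTime!)
  }

  func append(_ sampleBuffer: CMSampleBuffer, from output: AVCaptureOutput) {
    guard let start = recordingStartTime else { return }
    // Drop anything captured before the record tap instead of rewriting timestamps.
    guard CMSampleBufferGetPresentationTimeStamp(sampleBuffer) >= start else { return }

    switch output {
    case is AVCaptureVideoDataOutput where videoWriter.isReadyForMoreMediaData:
      videoWriter.append(sampleBuffer)
    case is AVCaptureAudioDataOutput where audioWriter.isReadyForMoreMediaData:
      audioWriter.append(sampleBuffer)
    default:
      break
    }
  }
}
```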
Posted by mrousavy.
Post not yet marked as solved
1 Reply
584 Views
Hey all! I'm trying to build a Camera app that records Video and Audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter. When creating the Recording Session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why.

This is how I create my AVAssetWriter:

```swift
let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)

let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)

let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)

assetWriter.startWriting()
```

There are two slow parts in that code:

1. The createAudioWriter(...) function takes ages! This is how I create the audio AVAssetWriterInput:

   ```swift
   // audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
   let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
   let format = audioInput.device.activeFormat.formatDescription
   let audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: settings, sourceFormatHint: format)
   audioWriter.expectsMediaDataInRealTime = true
   ```

   The above code takes up to 3000ms on an iPhone 11 Pro! When I remove the recommended settings and just pass nil as outputSettings:

   ```swift
   audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
   audioWriter.expectsMediaDataInRealTime = true
   ```

   ...it initializes almost instantly - something like 30 to 50ms.

2. Starting the AVAssetWriter takes ages! Calling this method:

   ```swift
   assetWriter.startWriting()
   ```

   ...takes 3000 to 5000ms on my iPhone 11 Pro!

Does anyone have any ideas why this is so slow? Am I doing something wrong? It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds of initialization time is not acceptable.

Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here.

I'd appreciate any help, thanks!
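A sketch of one possible workaround, under the assumption that querying recommendedAudioSettingsForAssetWriter(writingTo:) on the record tap is what costs the time: pass an explicit output-settings dictionary instead. The concrete values below (AAC, 44.1 kHz, mono, 128 kbps) are assumptions, not what the recommended-settings call would have returned, so they trade a guaranteed-matching configuration for a faster setup.

```swift
import AVFoundation

// Hypothetical explicit settings; values are assumptions, not the ones
// recommendedAudioSettingsForAssetWriter(writingTo:) would return.
let audioSettings: [String: Any] = [
  AVFormatIDKey: kAudioFormatMPEG4AAC,
  AVSampleRateKey: 44_100.0,
  AVNumberOfChannelsKey: 1,
  AVEncoderBitRateKey: 128_000,
]

let audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
audioWriter.expectsMediaDataInRealTime = true
```

Alternatively, the whole writer setup (including startWriting()) could be prepared eagerly on a background queue when the camera screen appears, so the remaining cost does not block the record tap.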
Posted by mrousavy.
Post not yet marked as solved
1 Reply
1.4k Views
I am drawing stuff onto an off-screen MTLTexture (using Skia Canvas). At a later point, I want to render this MTLTexture into a CAMetalLayer to display it on the screen. Since I was using Skia for the off-screen drawing operations, my code is quite simple and I don't have the typical Metal setup (no MTLLibrary, MTLRenderPipelineDescriptor, MTLRenderPassDescriptor, MTLRenderEncoder, etc). I now simply want to draw that MTLTexture into a CAMetalLayer, but haven't figured out how to do so simply.

This is where I draw my stuff to the MTLTexture _texture (Skia code):

```objc
- (void) renderNewFrameToCanvas(Frame frame) {
  if (_skContext == nullptr) {
    GrContextOptions grContextOptions;
    _skContext = GrDirectContext::MakeMetal((__bridge void*)_device,
                                            // TODO: Use separate command queue for this context?
                                            (__bridge void*)_commandQueue,
                                            grContextOptions);
  }

  @autoreleasepool {
    // Lock Mutex to block the runLoop from overwriting the _texture
    std::lock_guard lockGuard(_textureMutex);
    auto texture = _texture;

    // Get & Lock the writeable Texture from the Metal Drawable
    GrMtlTextureInfo fbInfo;
    fbInfo.fTexture.retain((__bridge void*)texture);
    GrBackendRenderTarget backendRT(texture.width, texture.height, 1, fbInfo);

    // Create a Skia Surface from the writable Texture
    auto skSurface = SkSurface::MakeFromBackendRenderTarget(_skContext.get(),
                                                            backendRT,
                                                            kTopLeft_GrSurfaceOrigin,
                                                            kBGRA_8888_SkColorType,
                                                            nullptr,
                                                            nullptr);
    auto canvas = skSurface->getCanvas();
    auto surface = canvas->getSurface();

    // Clear anything that's currently on the Texture
    canvas->clear(SkColors::kBlack);

    // Converts the Frame to an SkImage - RGB.
    auto image = SkImageHelpers::convertFrameToSkImage(_skContext.get(), frame);
    canvas->drawImage(image, 0, 0);

    // Flush all appended operations on the canvas and commit it to the SkSurface
    canvas->flush();

    // TODO: Do I need to commit?
    /*
    id<MTLCommandBuffer> commandBuffer([_commandQueue commandBuffer]);
    [commandBuffer commit];
    */
  }
}
```

Now, since I have the MTLTexture _texture in memory, I want to draw it to the CAMetalLayer _layer. This is what I have so far:

```objc
- (void) setup {
  // I set up a runLoop that calls render() 60 times a second.
  // [removed to simplify]

  _renderPassDescriptor = [[MTLRenderPassDescriptor alloc] init];

  // Load the compiled Metal shader (PassThrough.metal)
  auto baseBundle = [NSBundle mainBundle];
  auto resourceBundleUrl = [baseBundle URLForResource:@"VisionCamera" withExtension:@"bundle"];
  auto resourceBundle = [[NSBundle alloc] initWithURL:resourceBundleUrl];
  auto shaderLibraryUrl = [resourceBundle URLForResource:@"PassThrough" withExtension:@"metallib"];
  id<MTLLibrary> defaultLibrary = [_device newLibraryWithURL:shaderLibraryUrl error:nil];
  id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexPassThrough"];
  id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"fragmentPassThrough"];

  // Create a Pipeline Descriptor that connects the CPU draw operations to the GPU Metal context
  auto pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
  pipelineDescriptor.label = @"VisionCamera: Frame Texture -> Layer Pipeline";
  pipelineDescriptor.vertexFunction = vertexFunction;
  pipelineDescriptor.fragmentFunction = fragmentFunction;
  pipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
  _pipelineState = [_device newRenderPipelineStateWithDescriptor:pipelineDescriptor error:nil];
}

- (void) render() {
  @autoreleasepool {
    // Blocks until the next Frame is ready (16ms at 60 FPS)
    auto drawable = [_layer nextDrawable];

    std::unique_lock lock(_textureMutex);
    auto texture = _texture;

    MTLRenderPassDescriptor* renderPassDescriptor = [[MTLRenderPassDescriptor alloc] init];
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor();

    id<MTLCommandBuffer> commandBuffer([_commandQueue commandBuffer]);
    auto renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
    [renderEncoder setLabel:@"VisionCamera: PreviewView Texture -> Layer"];
    [renderEncoder setRenderPipelineState:_pipelineState];
    [renderEncoder setFragmentTexture:texture atIndex:0];
    [renderEncoder endEncoding];
    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];

    lock.unlock();
  }
}
```

And along with that, I have created the PassThrough.metal shader which is just for passing through a texture:

```metal
#include <metal_stdlib>
using namespace metal;

// Vertex input/output structure for passing results from vertex shader to fragment shader
struct VertexIO {
  float4 position [[position]];
  float2 textureCoord [[user(texturecoord)]];
};

// Vertex shader for a textured quad
vertex VertexIO vertexPassThrough(const device packed_float4 *pPosition  [[ buffer(0) ]],
                                  const device packed_float2 *pTexCoords [[ buffer(1) ]],
                                  uint                        vid        [[ vertex_id ]]) {
  VertexIO outVertex;
  outVertex.position = pPosition[vid];
  outVertex.textureCoord = pTexCoords[vid];
  return outVertex;
}

// Fragment shader for a textured quad
fragment half4 fragmentPassThrough(VertexIO        inputFragment [[ stage_in ]],
                                   texture2d<half> inputTexture  [[ texture(0) ]],
                                   sampler         samplr        [[ sampler(0) ]]) {
  return inputTexture.sample(samplr, inputFragment.textureCoord);
}
```

Running this crashes the app with the following exception:

```
validateRenderPassDescriptor:782: failed assertion `RenderPass Descriptor Validation
Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)
```

This now raises three questions for me:

1. Do I have to do all of that Metal setting up, packing along the PassThrough.metal shader, render pass stuff, etc just to draw the MTLTexture to the CAMetalLayer? Is there no simpler way?
2. Why is the code above failing?
3. When is the drawing from Skia actually committed to the MTLTexture? Do I need to commit the command buffer (as seen in my TODO)?
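A possibly simpler route, sketched in Swift rather than Objective-C and not taken from the code above: skip the render pipeline and shader entirely and blit the finished texture into the layer's drawable. This assumes the off-screen texture and the drawable share pixel format and dimensions, and that the layer's framebufferOnly property is set to false so the drawable's texture is a valid blit destination.

```swift
import Metal
import QuartzCore

/// Copies an already-rendered texture into the layer's next drawable without a render pipeline.
/// Assumes `texture` and the drawable share pixel format and size, and that
/// `layer.framebufferOnly == false` so the drawable's texture can be a blit destination.
func present(texture: MTLTexture, on layer: CAMetalLayer, commandQueue: MTLCommandQueue) {
  guard let drawable = layer.nextDrawable(),
        let commandBuffer = commandQueue.makeCommandBuffer(),
        let blit = commandBuffer.makeBlitCommandEncoder() else { return }

  blit.copy(from: texture,
            sourceSlice: 0,
            sourceLevel: 0,
            sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
            sourceSize: MTLSize(width: texture.width, height: texture.height, depth: 1),
            to: drawable.texture,
            destinationSlice: 0,
            destinationLevel: 0,
            destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
  blit.endEncoding()

  commandBuffer.present(drawable)
  commandBuffer.commit()
}
```

Whether this works also depends on how the Skia-side texture was created (its MTLTextureUsage flags) and on Skia's GPU work being submitted before the blit, which this sketch does not cover.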
Posted by mrousavy.
Post not yet marked as solved
1 Reply
1.7k Views
Issue

I'm using AVFoundation to implement a Camera that is able to record videos while running special AI processing. Having an AVCaptureMovieFileOutput (for video recording) and an AVCaptureVideoDataOutput (for processing AI) running at the same time is not supported (see https://stackoverflow.com/q/4944083/5281431), so I have decided to use a single AVCaptureVideoDataOutput which is able to record videos to a file while running the AI processing in the same captureOutput(...) callback.

To my surprise, doing that drastically increases RAM usage from 58 MB to 187 MB (!!!), and CPU from 3-5% to 7-12% while idle. While actually recording, the RAM goes up even more (260 MB!). I am wondering what I did wrong here, since I disabled all the AI processing and just compared the differences between AVCaptureMovieFileOutput and AVCaptureVideoDataOutput.

My code:

AVCaptureMovieFileOutput

Setup:

```swift
if let movieOutput = self.movieOutput {
  captureSession.removeOutput(movieOutput)
}
movieOutput = AVCaptureMovieFileOutput()
captureSession.addOutput(movieOutput!)
```

Delegate: well, there is none; AVCaptureMovieFileOutput handles all that internally.

Benchmark:
- When idle, so not recording at all: RAM: 56 MB, CPU: 3-5%
- When recording using AVCaptureMovieFileOutput.startRecording: RAM: 56 MB (how???), CPU: 20-30%

AVCaptureVideoDataOutput

Setup:

```swift
// Video
if let videoOutput = self.videoOutput {
  captureSession.removeOutput(videoOutput)
  self.videoOutput = nil
}
videoOutput = AVCaptureVideoDataOutput()
videoOutput!.setSampleBufferDelegate(self, queue: videoQueue)
videoOutput!.alwaysDiscardsLateVideoFrames = true
captureSession.addOutput(videoOutput!)

// Audio
if let audioOutput = self.audioOutput {
  captureSession.removeOutput(audioOutput)
  self.audioOutput = nil
}
audioOutput = AVCaptureAudioDataOutput()
audioOutput!.setSampleBufferDelegate(self, queue: audioQueue)
captureSession.addOutput(audioOutput!)
```

Delegate:

```swift
extension CameraView: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
  public final func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    // empty
  }

  public final func captureOutput(_ captureOutput: AVCaptureOutput, didDrop buffer: CMSampleBuffer, from _: AVCaptureConnection) {
    // empty
  }
}
```

Yes, they are literally empty methods. My RAM and CPU usage is still that high without doing any work here.

Benchmark:
- When idle, so not recording at all: RAM: 151-187 MB, CPU: 7-12%
- When recording using a custom AVAssetWriter: RAM: 260 MB, CPU: 64%

Why is the AVCaptureMovieFileOutput so much more efficient than an empty AVCaptureVideoDataOutput? Also, why does its RAM not go up at all when recording, compared to how my AVAssetWriter implementation alone consumes 80 MB?

Here's my custom AVAssetWriter implementation: [RecordingSession.swift](https://github.com/cuvent/react-native-vision-camera/blob/frame-processors/ios/RecordingSession.swift), and here's where I call it - https://github.com/cuvent/react-native-vision-camera/blob/a48ca839e93e6199ad731f348e19427774c92821/ios/CameraView%2BRecordVideo.swift#L16-L86.

Any help appreciated!
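A small, speculative knob rather than an explanation of the numbers above: AVCaptureVideoDataOutput delivers every frame into the app's process as a CVPixelBuffer, so the pixel format it is asked to produce affects the size of its buffer pool. Requesting a 4:2:0 bi-planar YUV format instead of 32-bit BGRA is one thing worth measuring; the format chosen below is an assumption, and the savings will vary by device and session preset.

```swift
import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.alwaysDiscardsLateVideoFrames = true

// Ask for 4:2:0 bi-planar YUV, which is significantly smaller per frame than 32-bit BGRA,
// so the output's internal pixel buffer pool should need less memory at 4K.
videoOutput.videoSettings = [
  kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
]
```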
Posted by mrousavy.
Post not yet marked as solved
1 Reply
1.4k Views
Hi! I have created a Camera using AVFoundation which is able to record video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. I create my capture session, attach all inputs and the video- and audio-data outputs, and the Camera then sits idle. The user is now able to start a video recording.

Problem

The problem with this is that immediately after I start the capture session, the background music stutters. This is really annoying, since the Camera is the start screen in our app and I want to delay the stuttering until the audio is actually needed, which is when the user starts recording a video.

I know that this is somehow possible because Snapchat works that way - you open the app and background audio smoothly continues to play. Once you start recording, there is a small stutter on the background music, but the Camera smoothly operates and starts recording once the short stutter is over.

My code:

```swift
func configureSession() {
  captureSession.beginConfiguration()

  // Video, Photo and Audio Inputs
  ...

  // Video Output
  ...

  // Audio Output
  audioOutput = AVCaptureAudioDataOutput()
  guard captureSession.canAddOutput(audioOutput!) else {
    throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
  }
  audioOutput!.setSampleBufferDelegate(self, queue: audioQueue)
  captureSession.addOutput(audioOutput!)

  try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord,
                                                  options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])

  captureSession.commitConfiguration()
}
```

What I tried

Delay configuring the AVAudioSession.sharedInstance(): I tried to first configure the AVAudioSession.sharedInstance with the category AVAudioSession.Category.playback, and switch to .playAndRecord once I want to start recording audio. This didn't work, and the AVCaptureSessionRuntimeError event gets invoked with the error code -10851, which means kAudioUnitErr_InvalidPropertyValue. I think this means that the AVCaptureAudioDataOutput is not allowed to record from the Audio Session, but I don't even want to do that right now - it should just be idle.

Delay adding the AVCaptureAudioDataOutput output: I tried to not add the audio output (AVCaptureAudioDataOutput) in the beginning, and only add it "on-demand" once the user starts recording. While that worked fine for the background music (no stutter when starting, only a short stutter once the user starts recording, exactly how I want it), it made the Preview freeze for a short amount of time (because the Capture Session is being reconfigured via beginConfiguration + audio output adding + commitConfiguration).

Does anyone know how it's possible to achieve what I'm trying to do here - or how Snapchat does it? Any help appreciated, thanks!
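A sketch of one direction, consistent with the two-session approach described in the other posts on this page but not verified here: keep video and audio in separate AVCaptureSessions, start only the video session at camera launch, and defer both the AVAudioSession category change and the audio session start until recording begins, so the music stutter happens at record time rather than when the camera appears. The function names are hypothetical.

```swift
import AVFoundation

let videoCaptureSession = AVCaptureSession()
let audioCaptureSession = AVCaptureSession()

func setupCamera() {
  // Configure video inputs/outputs on videoCaptureSession only, so launching
  // the camera does not touch the shared audio session at all.
  videoCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  videoCaptureSession.startRunning()
}

func startRecording() throws {
  // The category change is what causes the short music stutter,
  // so it is deferred until the user actually records.
  try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                  options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker])
  audioCaptureSession.startRunning()
  // ...start the AVAssetWriter here...
}
```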
Posted by mrousavy.
Post not yet marked as solved
0 Replies
906 Views
I'm trying to create a camera capture session that has the following features:

- Preview
- Photo capture
- Video capture
- Realtime frame processing (AI)

While the first two things are not a problem, I haven't found a way to make the last two work separately. Currently, I use a single AVCaptureVideoDataOutput and run the video recording first, then the frame processing in the same function, in the same queue. (See the code here - https://github.com/cuvent/react-native-vision-camera/blob/49ae9844da0daf8ce259c2b2f482e1baed4a82c8/ios/CameraView%2BAVCaptureSession.swift#L16-L134 and here - https://github.com/cuvent/react-native-vision-camera/blob/49ae9844da0daf8ce259c2b2f482e1baed4a82c8/ios/CameraView%2BRecordVideo.swift#L121-L146.)

The only problem with this is that the video capture captures 4k video, and I don't really want the frame processor to receive 4k buffers, as that is going to be very slow and blocks the video recording (frame drops). Ideally I want to create one AVCaptureVideoDataOutput for 4k video recording, and another one that receives frames in a lower (preview?) resolution - but you cannot use two AVCaptureVideoDataOutputs in the same capture session.

I thought maybe I could "hook into" the Preview layer to receive the CMSampleBuffers from there, just like the captureOutput(...) func, since those are in preview-sized resolutions. Does anyone know if that is somehow possible?
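A sketch of one workaround under the constraint that a single AVCaptureVideoDataOutput has to serve both consumers: keep appending the full-resolution buffers to the recorder, and hand the frame processor a downscaled copy made with Core Image. The 0.25 scale factor and the BGRA output format are assumptions chosen for illustration.

```swift
import AVFoundation
import CoreImage

let ciContext = CIContext()  // reuse; creating a context per frame is expensive

/// Returns a downscaled BGRA copy of the frame for AI processing,
/// leaving the original 4K buffer untouched for the video recording path.
func downscaled(_ sampleBuffer: CMSampleBuffer, scale: CGFloat = 0.25) -> CVPixelBuffer? {
  guard let source = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
  let image = CIImage(cvPixelBuffer: source)
    .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

  var output: CVPixelBuffer?
  CVPixelBufferCreate(kCFAllocatorDefault,
                      Int(image.extent.width),
                      Int(image.extent.height),
                      kCVPixelFormatType_32BGRA,   // assumed format for the AI path
                      nil,
                      &output)
  guard let target = output else { return nil }
  ciContext.render(image, to: target)
  return target
}
```

In the captureOutput(...) callback, the original sampleBuffer would go to the AVAssetWriter as before, while downscaled(sampleBuffer) feeds the frame processor.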
Posted by mrousavy.
Post marked as solved
1 Reply
4.4k Views
I'm developing a Camera App which uses the presentationDimensions(...) API - https://developer.apple.com/documentation/coremedia/cmformatdescription/3242280-presentationdimensions?changes=latest_maj_8__8:

```swift
if #available(iOS 13.0, *) {
  let leftVideo = self.formatDescription.presentationDimensions()
  let rightVideo = other.formatDescription.presentationDimensions()
  // ...
}
```

Now when I try to build the project, I get the following errors:

```
Undefined symbol: (extension in CoreMedia):__C.CMFormatDescriptionRef.presentationDimensions(usePixelAspectRatio: Swift.Bool, useCleanAperture: Swift.Bool) -> __C.CGSize
```

Last Xcode log: Xcode error output log - https://developer.apple.com/forums/content/attachment/5a7ddf25-7db6-4e21-9a20-18b09a84f6c9

My .pbxproj: project.pbxproj - https://developer.apple.com/forums/content/attachment/1b325213-4bcc-46fb-b4c5-d5775a425f4e

Note that when I remove those calls to presentationDimensions everything works fine. Can anyone help me out here? The API has been available since iOS 13.0, and even if my iOS deployment target is 11.0 I should still be able to build it, no?
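If the linker problem cannot be resolved, one possible fallback, assuming the call is only needed for the raw video dimensions: the C function CMVideoFormatDescriptionGetDimensions has no iOS 13 requirement and does not go through the CoreMedia Swift overlay. Unlike presentationDimensions(), it ignores pixel aspect ratio and clean aperture.

```swift
import CoreMedia
import CoreGraphics

/// Fallback that avoids the CoreMedia Swift-overlay extension entirely.
/// Note: this returns the coded dimensions, not the presentation dimensions.
func codedSize(of formatDescription: CMFormatDescription) -> CGSize {
  let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
  return CGSize(width: Int(dimensions.width), height: Int(dimensions.height))
}
```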
Posted by mrousavy.