Posts

Post not yet marked as solved · 1 Reply · 684 Views
I think I'm looking for a way to check whether a video settings key is valid, so I can exclude it when it isn't. But I'm more curious why some keys are valid only sometimes, inconsistently. Let me explain.

With my macOS app, I'm trying to write a video with the compression quality set, like so:

```swift
let videoWriter = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoWidthKey: 1000,
        AVVideoHeightKey: 1000,
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoCompressionPropertiesKey: [
            AVVideoQualityKey: 0.4
        ]
    ]
)
```

It works on my M1 Max (13.4, 13.5) and my Intel Mac (12.0.1), but for one of my users (just one, for now?) on 13.5/Intel (Macbook16,1), this line throws an exception and crashes the app. This is the output:

```
*** -[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] Compression property Quality is not supported for video codec type avc1
```

Reproducing sample app

This tiny app reproduces the crash for my user (but still not for me): https://github.com/mortenjust/CompressionCrashTest/blob/main/CompressionCrashTest/ContentView.swift

What I tried

- I looked into catching Objective-C exceptions (from AVFoundation) in Swift via a wrapper, but it looks like a bad idea and isn't recommended by Apple.
- I tried changing the codec to .hevc; that also crashes, saying Quality is not a supported key.
- I asked my user to reboot. No effect.
- I exported a video with compression quality 0.1 on my Mac to verify that it's a valid key; the output video was compressed, with a small file size and visible distortion.
- I upgraded from 13.4 to 13.5 to match his specs, but it still doesn't crash for me.

I really wish there were a way to validate a video settings dictionary without crashing. I found AVCaptureMovieFileOutput's supportedOutputSettingsKeys(for:), which vaguely resembles what I need, but it's made for recording from a camera.
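In case it helps future readers: one idea I'm considering for probing support without risking the exception is to ask VideoToolbox which compression properties the encoder on the current machine advertises. This is only a sketch, and it assumes AVVideoQualityKey maps to kVTCompressionPropertyKey_Quality under the hood, which is my guess rather than documented behavior:

```swift
import AVFoundation
import VideoToolbox

/// Asks VideoToolbox for the supported-property dictionary of the H.264
/// encoder and checks whether it lists the Quality property.
/// Assumption: AVVideoQualityKey corresponds to kVTCompressionPropertyKey_Quality.
func encoderSupportsQuality(width: Int32, height: Int32) -> Bool {
    var supported: CFDictionary?
    let status = VTCopySupportedPropertyDictionaryForEncoder(
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: nil,
        encoderIDOut: nil,
        supportedPropertiesOut: &supported
    )
    guard status == noErr, let properties = supported as NSDictionary? else {
        return false
    }
    return properties[kVTCompressionPropertyKey_Quality] != nil
}

// Build the compression dictionary defensively: only include the quality
// entry when the encoder on this machine claims to support it.
var compressionProperties: [String: Any] = [:]
if encoderSupportsQuality(width: 1000, height: 1000) {
    compressionProperties[AVVideoQualityKey] = 0.4
}
```

If that assumption holds, my user's machine would simply report no Quality property for avc1, and the app could fall back to something like AVVideoAverageBitRateKey instead of crashing.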
Post not yet marked as solved · 1 Reply · 545 Views
Understanding SCNCameraController

TL;DR: I'm able to create my own subclassed camera controller, but it only works for rotation, not translation. I made a demo repo here: https://github.com/mortenjust/Camera-Control-Demo

Background

I want to use SceneKit's camera controller to drive my scene's camera. The reason I want to subclass it is that my camera is on a rig, where I apply rotation to the rig and translation to the camera. I do that because I animate the camera, and applying both translation and rotation to the camera node doesn't create the animation I want.

Setting up

- Instantiate my own SCNCameraController
- Set its pointOfView to my scene's pointOfView (or its parent node, I guess)

Using the camera controller

We now want the new camera controller to drive the scene.

- When interactions begin (e.g. mouseDown), call beginInteraction(_ location: CGPoint, withViewport viewport: CGSize)
- When interactions update and end, call the corresponding functions on the camera controller

Actual behavior

It works when I begin/update/end interactions from mouse-down events. It ignores all other event types, like magnification and scrollWheel, which do work in e.g. the SceneKit editor in Xcode. See MySCNView.swift in the repo for a demo.

By overriding the camera controller's rotate function, I can see that it is called with deltas. This is great. But when I override translateInCameraSpaceBy, my print statements don't appear and the scene doesn't translate.

Expected behavior

I expected SCNCameraController to also apply translations and rolls to the pointOfView by inspecting the currentEvent and figuring out what to do. I'm inclined to think I'm supposed to call translateInCameraSpaceBy myself, but that seems inconsistent with how begin/continue/end interaction seems to call rotate.

Demo repo: https://github.com/mortenjust/Camera-Control-Demo
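For anyone comparing notes: since rotate gets driven for me but translation doesn't, my working assumption is that only rotation is wired up by the begin/continue/end interaction path, and other gestures have to be forwarded by hand. Here's a minimal sketch of that idea, using the view's built-in cameraController (with a subclassed controller you'd call the same methods on your own instance; the 0.01 pan scale is just a number I picked):

```swift
import AppKit
import SceneKit

class MySCNView: SCNView {
    // Forward two-finger scrolls as camera-space translation, since the
    // interaction API doesn't appear to do this on its own.
    override func scrollWheel(with event: NSEvent) {
        cameraController.translateInCameraSpaceBy(
            x: Float(-event.scrollingDeltaX) * 0.01,
            y: Float(event.scrollingDeltaY) * 0.01,
            z: 0
        )
    }

    // Forward pinches as a dolly toward the point under the cursor.
    override func magnify(with event: NSEvent) {
        cameraController.dolly(
            by: Float(event.magnification),
            onScreenPoint: convert(event.locationInWindow, from: nil),
            viewport: bounds.size
        )
    }
}
```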
Post not yet marked as solved · 0 Replies · 742 Views
In my Mac app, I'm mirroring my iOS device to a SceneKit object's material. This works great on my Intel Mac, but not on my M1 Mac mini. Here's how I do it.

First, make sure connected iOS devices can be discovered as AVCaptureDevices, with this snippet: https://github.com/mortenjust/Capture-device/blob/main/Capture%20device/ViewController.swift#L72

Then, add the device's input to the session and grab the layer from the session:

```swift
let device = AVCaptureDevice.devices(for: .muxed).first!
input = try! AVCaptureDeviceInput(device: device)
session.addInput(input!)
let layer = AVCaptureVideoPreviewLayer(session: session)
```

Finally, set the layer as the material's diffuse contents:

```swift
boxNode.geometry?.firstMaterial?.diffuse.contents = layer
```

What happens

On Intel, the device is now mirrored on the material. On M1, it crashes with a Bad Access exception. I've tried holding on to all variables by setting them as class properties, but that was not it. I've tried mirroring to an NSView's layer, and that works fine.

You can check out the entire project here: https://github.com/mortenjust/Capture-device/tree/main/Capture%20device
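One workaround I may try, sketched below and untested on M1: skip AVCaptureVideoPreviewLayer entirely and push frames to the material from an AVCaptureVideoDataOutput. It reuses session and boxNode from the snippets above; the CIContext round-trip to CGImage is my own choice and isn't free, performance-wise.

```swift
import AVFoundation
import CoreImage
import SceneKit

// Receives frames from the muxed device and hands them to the material
// as CGImages, avoiding the preview layer that crashes on M1.
class FrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    weak var material: SCNMaterial?
    private let context = CIContext()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(image, from: image.extent) else { return }
        DispatchQueue.main.async {
            self.material?.diffuse.contents = cgImage
        }
    }
}

// Keep strong references to these for the session's lifetime.
let receiver = FrameReceiver()
let output = AVCaptureVideoDataOutput()
receiver.material = boxNode.geometry?.firstMaterial
output.setSampleBufferDelegate(receiver, queue: DispatchQueue(label: "frames"))
session.addOutput(output)
```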
Post marked as solved · 5 Replies · 1.9k Views
I'm trying to export an SCNScene (with animations) to a video file. It's working, but the result is jagged, so I want to add jittering to the scene I'm rendering. I'm adding one line of code that makes this Apple sample code crash: https://developer.apple.com/documentation/avfoundation/media_playback_and_selection/using_hevc_video_with_alpha

Here's a direct link to the line I added (line 47): https://github.com/mortenjust/hevc-with-alpha/blob/main/HEVC-Videos-With-Alpha-AssetWriting/HEVC-Videos-With-Alpha-AssetWriting/AppDelegate.swift#L47

The line in question is:

```swift
renderer.isJitteringEnabled = true
```

When I remove the line, it works fine. When I add the line, the app crashes with an assertion error saying that the pixel buffer's pixel format and the Metal texture's pixel format don't match:

```
[MTLDebugRenderCommandEncoder validateFramebufferWithRenderPipelineState:]:1288: failed assertion `Framebuffer With Render Pipeline State Validation
For color attachment 0, the render pipeline's pixelFormat (MTLPixelFormatRGBA16Float) does not match the framebuffer's pixelFormat (MTLPixelFormatBGRA8Unorm_sRGB).
For color attachment 1, the renderPipelineState pixelFormat must be MTLPixelFormatInvalid, as no texture is set.
```

So I tried setting the Metal texture's pixel format like this:

```swift
let pixelFormat = MTLPixelFormat.rgba16Uint
```

But now I'm getting another error I don't understand:

```
_mtlValidateStrideTextureParameters:1656: failed assertion `Texture Descriptor Validation
IOSurface texture: bytesPerRow (5120) must be greater or equal to (10240) bytes
```
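For anyone who hits the same wall, my current theory, which is an assumption I haven't verified: with jittering on, SCNRenderer resolves into an RGBA16Float target, so the texture would need .rgba16Float (not .rgba16Uint, which is an integer format), and the backing pixel buffer would need 8 bytes per pixel instead of 4. That matches the numbers in the error: 5120 is 1280 × 4, and the required 10240 is 1280 × 8. A sketch of making the two sides agree:

```swift
import CoreVideo
import Metal

let width = 1280
let height = 720

// Back the render target with a 64-bit half-float pixel buffer
// (8 bytes per pixel), so bytesPerRow becomes width * 8 = 10240.
var pixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(
    kCFAllocatorDefault,
    width,
    height,
    kCVPixelFormatType_64RGBAHalf,
    [kCVPixelBufferMetalCompatibilityKey as String: true] as CFDictionary,
    &pixelBuffer
)
assert(status == kCVReturnSuccess)

// Match the texture to the pipeline's expected format: .rgba16Float,
// not .rgba16Uint, which is a different (integer) format.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba16Float,
    width: width,
    height: height,
    mipmapped: false
)
descriptor.usage = [.renderTarget, .shaderRead]
```

Whether the HEVC-with-alpha writer then accepts frames in kCVPixelFormatType_64RGBAHalf is a separate question I haven't answered.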