Elsewhere in the forum, under a discussion of the WWDC23 session "Discover Metal for Spatial Computing", there is a link to some GitHub sample code provided by indie developers.
In particular, this sample illustrates how to pass eye-specific parameters to a shader:
github.com/musesum/SpatialMetal2
My recommendation is to modify the Uniforms struct in ShaderTypes.h to include whatever eye-specific data you require:
struct Uniforms {
    matrix_float4x4 projection;
    matrix_float4x4 viewModel;
};

struct UniformEyes {
    Uniforms eye[2];
};
Unfortunately, this sample code uses parallel declarations of these structs, which, to my knowledge, is not the correct way to pass structs from Swift to Metal. Structs destined for use in Metal must be declared in C and imported into Swift via the bridging header, because Swift can reorder struct members and may use different rules for padding and alignment.
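For example, assuming the shared header is named ShaderTypes.h (as in the Xcode template) and your target uses a bridging header, whose file name here is hypothetical, the import is a one-liner:

// MyApp-Bridging-Header.h (hypothetical file name)
// Importing the C header makes Uniforms and UniformEyes visible to Swift
// with exactly the layout the Metal shaders expect.
#include "ShaderTypes.h"

The same ShaderTypes.h can be #include-d from the .metal files, so the CPU and GPU sides share a single definition of the structs.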
To that end, your solution should probably be based on the Xcode template, with modifications that provide the same functionality as this sample.
If you arrived here from warrenm's sample code:
metal-by-example/metal-spatial-rendering
Here's an updated fork with the above-mentioned changes:
https://github.com/Verlet/metal-spatial-rendering/
I have a simple SceneKit app that uses SCNRenderer to render into an offscreen Metal texture. It works fine on macOS and iOS, but on iPadOS it produces the above error message as well. When running under the Xcode debugger the view appears and renders correctly, but when running on the device without the debugger it just crashes.
com.apple.scenekit.renderingQueue.SCNView0x107b05380
Thread 1 Crashed:
0 libsystem_kernel.dylib 0x1c57f7558 __pthread_kill + 8
1 libsystem_pthread.dylib 0x1e62b1118 pthread_kill + 268
2 libsystem_c.dylib 0x18e8ea178 abort + 180
3 libsystem_c.dylib 0x18e9420a4 __assert_rtn + 272
4 Metal 0x18217921c MTLReportFailure.cold.1 + 48
5 Metal 0x1821580b8 MTLReportFailure + 464
6 Metal 0x18206dab8 -[_MTLCommandBuffer addCompletedHandler:] + 128
7 OrbEnergy 0x104500544 MTLOffscreenContext.offscreenRender(withTextureTarget:clearColor:waitUntilCompleted:sceneRender:) + 2984
8 OrbEnergy 0x1045077d4 OrbSCNScene2Metal2MetalViewController.renderOffscreenSCNRenderer(atRenderTime:) + 544
9 OrbEnergy 0x104507a04 OrbSCNScene2Metal2MetalViewController.renderer(_:updateAtTime:) + 68
10 OrbEnergy 0x104507a4c @objc OrbSCNScene2Metal2MetalViewController.renderer(_:updateAtTime:) + 60
11 SceneKit 0x1d44f048c -[SCNRenderer _update:] + 360
12 SceneKit 0x1d44f0014 -[SCNRenderer _drawSceneWithNewRenderer:] + 156
13 SceneKit 0x1d44eff14 -[SCNRenderer _drawScene:] + 44
14 SceneKit 0x1d44e90fc -[SCNRenderer _drawAtTime:] + 504
15 SceneKit 0x1d44e8cdc -[SCNView _drawAtTime:] + 372
16 SceneKit 0x1d4606b84 _
The exact error message is:
[API] Failed to create 0x88 image slot (alpha=1 wide=1) (client=0x18960791) [0x5 (os/kern) failure]
I recommend that any CVPixelBufferRef objects you send via CMSampleBufferRef objects be backed by IOSurface (specify this in the pixel buffer properties), which prevents unnecessary copies from being made when the frames are sent to the client. This is probably most important on Intel machines with discrete GPUs.
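As a minimal sketch (the function name, dimensions, and the 32BGRA format are placeholders of my own), requesting IOSurface backing amounts to adding an empty dictionary under kCVPixelBufferIOSurfacePropertiesKey to the pixel buffer attributes:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

static CVPixelBufferPoolRef CreateSurfaceBackedPool(size_t width, size_t height)
{
    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_32BGRA),
        (id)kCVPixelBufferWidthKey               : @(width),
        (id)kCVPixelBufferHeightKey              : @(height),
        (id)kCVPixelBufferIOSurfacePropertiesKey : @{}   // empty dict = IOSurface backing
    };

    CVPixelBufferPoolRef pool = NULL;
    CVReturn result = CVPixelBufferPoolCreate(kCFAllocatorDefault,
                                              NULL,   // default pool attributes
                                              (__bridge CFDictionaryRef)pixelBufferAttributes,
                                              &pool);
    return (result == kCVReturnSuccess) ? pool : NULL;
}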
For some code to query the device list, open streams and retrieve property values, this sample may be useful:
https://github.com/bangnoise/cmiotest
If you see this problem in your extension when using Photo Booth as a client, try creating the CVPixelBufferRef as an individual allocation rather than from a pool.
Answering my own question, sorta kinda:
Deep in the heart of AVFoundation.framework in AVCaptureVideoDataOutput.h we find the following comment:
@method captureOutput:didOutputSampleBuffer:fromConnection:
@abstract
Called whenever an AVCaptureVideoDataOutput instance outputs a new video frame.
Note that to maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.
It is my belief that Photo Booth (not unlike my own test client) is passing the underlying CVPixelBufferRef (obtained from CMSampleBufferGetImageBuffer) directly into a [CIImage imageWithCVPixelBuffer:] (or similar) and then passing that CIImage into a CIFilter graph for rendering to a CIContext. Or possibly doing this indirectly via an inputKey on QCRenderer.
If Photo Booth is using CMSampleBufferRefs in this way (i.e. not necessarily following the above advice) then I think it is safe to assume there may be a wide variety of other camera apps which also hold on to CMSampleBuffers obtained from a CMIO extension for arbitrary periods of time.
Contrary to the CMIO extension sample code and the advice in the AVCaptureVideoDataOutput.h header, it may be best to allocate your camera frame buffers individually rather than from a pool. Or if your pool runs out of buffers, have a failsafe path that vends individual buffers until the pool is safe for diving.
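Here is a hedged sketch of that failsafe path (function and constant names are hypothetical; the aux-attributes variant of the pool call is used so that exhaustion is actually detectable rather than the pool silently growing):

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

static const long kMaxPooledBuffers = 10;   // arbitrary threshold for this sketch

static CVPixelBufferRef CopyNextFrameBuffer(CVPixelBufferPoolRef pool,
                                            size_t width, size_t height)
{
    CVPixelBufferRef buffer = NULL;

    // Fast path: draw from the pool, refusing to grow it past the threshold.
    NSDictionary *aux = @{ (id)kCVPixelBufferPoolAllocationThresholdKey : @(kMaxPooledBuffers) };
    CVReturn result = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault,
                                                                          pool,
                                                                          (__bridge CFDictionaryRef)aux,
                                                                          &buffer);
    if (result == kCVReturnSuccess) {
        return buffer;
    }

    // Failsafe path: kCVReturnWouldExceedAllocationThreshold (or any other
    // failure) means the pool is tapped out, so vend an individual
    // IOSurface-backed buffer instead of dropping the frame.
    NSDictionary *attributes = @{
        (id)kCVPixelBufferIOSurfacePropertiesKey : @{}
    };
    result = CVPixelBufferCreate(kCFAllocatorDefault,
                                 width, height,
                                 kCVPixelFormatType_32BGRA,
                                 (__bridge CFDictionaryRef)attributes,
                                 &buffer);
    return (result == kCVReturnSuccess) ? buffer : NULL;
}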
I am quite familiar with the older DAL architecture, having built a large number of camera plug-ins dating back to 2011, and before that VDIG components, which were the predecessor to DAL. I am not aware of any API in the C-language CoreMediaIO framework that restricts scalar property data to a specific value range. It's possible that this is part of some kind of validation layer, new to CMIO Extensions, aimed at preventing invalid values from being sent to your custom properties.
The available property get/set functions (such as the ones you're using) are declared in CMIOHardwareObject.h in CoreMediaIO.framework.
I believe these are the only functions for accessing properties from the client side. Try sending an out-of-range value to your property declared with a max and min, and see if anything pops up in the os_log.
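To illustrate the client side (a sketch only: 'cus1' is a made-up selector, and deviceID comes from your own device enumeration, e.g. with the cmiotest sample linked earlier):

#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>

static void TestOutOfRangeValue(CMIOObjectID deviceID)
{
    CMIOObjectPropertyAddress address = {
        .mSelector = 'cus1',                           // made-up custom selector
        .mScope    = kCMIOObjectPropertyScopeGlobal,
        .mElement  = kCMIOObjectPropertyElementMaster  // renamed ...ElementMain in newer SDKs
    };

    if (!CMIOObjectHasProperty(deviceID, &address)) {
        NSLog(@"Device does not expose this custom property");
        return;
    }

    Float64 bogusValue = 1.0e9;   // deliberately outside the declared min/max
    OSStatus err = CMIOObjectSetPropertyData(deviceID, &address,
                                             0, NULL,
                                             sizeof(bogusValue), &bogusValue);
    NSLog(@"CMIOObjectSetPropertyData returned %d", (int)err);
    // Then watch the os_log output from the extension to see whether the
    // value is clamped, rejected, or delivered unchanged.
}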
If you have a more complex property data type (such as your ranged values), my suggestion is to serialize it into an NSDictionary and then serialize the NSDictionary into an NSData object. Pass the NSData over the custom property connection. You could also use NSKeyedArchiver to flatten an arbitrary NSObject-derived class conforming to NSCoding into NSData, or serialize JSON data into an NSString.
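For example, a minimal sketch of that approach using NSKeyedArchiver (the min/max range payload and function names are hypothetical):

#import <Foundation/Foundation.h>

static NSData *EncodeRange(double minValue, double maxValue)
{
    // Flatten the range into a dictionary, then into NSData for transport
    // over the custom property connection.
    NSDictionary *payload = @{ @"min" : @(minValue), @"max" : @(maxValue) };
    NSError *error = nil;
    return [NSKeyedArchiver archivedDataWithRootObject:payload
                                 requiringSecureCoding:YES
                                                 error:&error];
}

static NSDictionary *DecodeRange(NSData *data)
{
    // Unarchive on the other side of the connection with matching classes.
    NSError *error = nil;
    NSSet *classes = [NSSet setWithObjects:[NSDictionary class], [NSNumber class], [NSString class], nil];
    return [NSKeyedUnarchiver unarchivedObjectOfClasses:classes fromData:data error:&error];
}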
In my testing, older DAL plug-ins are already non-functional on macOS Ventura (even though the WWDC presentation by Brad Ford says Ventura will be the last OS to support DAL). But it can be helpful to have a plug-in built with the legacy DAL system so you can see exactly how the property data is communicated in the old system before trying to migrate to the new extension system. Monterey appears to be the last version of macOS to support DAL CFPlugIn components and, as such, is probably the preferred OS for developing CMIO solutions.
I would recommend against using this sample code (below) for production, because there are some serious race conditions in it (i.e., multiple AVCaptureSession instances access the 'object property store' without any locks or queues). But for getting the gist of the property data flow, it will give you both sides of the equation within a single process that Xcode can easily debug:
https://github.com/johnboiles/coremediaio-dal-minimal-example
Once you have it completely grokked, then migrate to the new CMIO Extension property system.
There's a Swift adaptation of johnboiles' sample code that's even more heinous, because it pulls a Swift runtime into the host process - and that's an exciting party if the host was built with a different version of Swift. But if you're just using it for scaffolding, it may serve your needs.
Annotating this thread subsequent to the transition to Apple Silicon, which is basically complete at the time of this writing. I think the methodology proposed at the top of this discussion is a workable and effective strategy for dealing with this problem, which is going to become more and more pervasive. Many new SDKs from Apple will be thread-safe and capable of generating KVO notifications on any thread or queue. However, I think it unlikely that AppKit and UIKit will be thread-safe. And there's the challenge of supporting the widest array of Mac hardware.
The Objective-C runtime already has solutions for this problem, dating back to a technology called Distributed Objects. Not much has been said about this tech for a long while, because security concerns promoted XPC to the foreground, but XPC doesn't really provide the same functionality.
The point here is that the NSProxy and NSInvocation classes and patterns can be used to "remote" almost any Objective-C method call, including across thread boundaries, process boundaries, and to remote machines on the LAN or interweb. Check out NSDistantObject for some perspective.
You can build a controller layer whose sole purpose is to proxy all KVO notifications onto the main thread to AppKit.
Take this sample code from the archive for example:
https://developer.apple.com/library/archive/samplecode/AVRecorder/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011004
I have refactored and referred to this sample several times; it's an excellent sample. But as of 2023, the KVO bindings from the UI are no longer working properly. Exceptions are thrown and KVO notifications are lost, leaving the UI in indeterminate states. Maybe these are bugs in AppKit that will be remedied sometime in the future.
However, I was easily able to solve the problems with this sample by building a controller layer between AppKit and AVCaptureDevice et al. This was before I found NSInvocation; basically I am dispatching to the main thread. My solution is just a simple proxy object that forwards all the valueForKeyPath-type methods to the target objects (I have one controller bound to all the various AVCaptureDevice key paths). It's a very simple class and has restored this sample code to its original lustre and glory, but it could be even simpler the next time I revisit the code.
For my next Cocoa nightmare I dove deeper into NSInvocation and learned that you can completely remote an entire class with just four Objective-C methods. Check out the docs for methodSignatureForSelector: and go down the rabbit hole:
From NSInvocation (plus the NSObject forwarding machinery):
+ (NSInvocation *)invocationWithMethodSignature:(NSMethodSignature *)sig;
- (void)invokeWithTarget:(id)target;
- (void)forwardInvocation:(NSInvocation *)invocation;
- (id)forwardingTargetForSelector:(SEL)methodSelector;
You'll get warnings from a modern Objective-C compiler, so declare the exposed key paths/properties as usual and mark them as @dynamic so the compiler doesn't synthesize methods for them. Once you get to googling NSInvocation and any of the four methods listed above, I think you'll find much has been written on this subject, going back to Panther and even OpenStep.
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DistrObjects/DistrObjects.html
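For illustration, here is a minimal sketch of that four-method pattern; this is not the class described above, and MainThreadProxy is a hypothetical name. Every message sent to the proxy is forwarded to the real target on the main thread, which keeps the AppKit side of the bindings on the thread it expects.

#import <Foundation/Foundation.h>

@interface MainThreadProxy : NSProxy
+ (instancetype)proxyWithTarget:(id)target;
@end

@implementation MainThreadProxy {
    id _target;
}

+ (instancetype)proxyWithTarget:(id)target
{
    MainThreadProxy *proxy = [self alloc];   // NSProxy has no -init
    proxy->_target = target;
    return proxy;
}

// NSProxy routes every message it doesn't recognize through these two methods.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel
{
    return [_target methodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation
{
    [invocation retainArguments];
    if ([NSThread isMainThread]) {
        [invocation invokeWithTarget:_target];
    } else {
        // dispatch_sync keeps methods with return values working; consider
        // dispatch_async for void notifications if deadlock is a concern.
        dispatch_sync(dispatch_get_main_queue(), ^{
            [invocation invokeWithTarget:self->_target];
        });
    }
}

@end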
Getting this os_log message sporadically while running only QuickTime Player (in record-ready mode) and switching back and forth between two different CMIO Extensions.
CMIO_Unit_Input_HAL.cpp:1724:DoInputRenderCallback Dropping 512 frames, because we're out of buffers
os_log message seen after the sink stream stopped and started, creating a gap in the CMSampleTimingInfo:
CMIO_Unit_Synchronizer_Video.cpp:1256:SyncUsingIntegralFrameTiming creating discontinuity because the timebase jumped
This would seem to imply that CMIO automagically adapts the CMIOExtensionStreamDiscontinuityFlags if it detects that the flags passed in are incorrect (i.e., there is a discontinuity even though the flag passed was CMIOExtensionStreamDiscontinuityFlagNone, which is what my code passes at present).
I modified my code to update the CMSampleTimingInfo of the CMSampleBuffers as they pass through the extension as per this post:
https://developer.apple.com/forums/thread/725481
and I am no longer seeing this message:
CMIO_Unit_Synchronizer_Video.cpp:1435:SyncUsingIntegralFrameTiming observing getting frames too quickly by a lot
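For reference, a sketch of that kind of retiming (my own illustration, not the code from the linked thread; the function name and the 30 fps duration are assumptions). The key point is that the timing is generated locally from the host clock rather than propagated from the client app:

#import <CoreMedia/CoreMedia.h>

static CMSampleBufferRef CopyRetimedSampleBuffer(CMSampleBufferRef source)
{
    // Stamp the buffer with locally generated, monotonically increasing timing.
    CMSampleTimingInfo timing = {
        .duration              = CMTimeMake(1, 30),                          // assumed frame rate
        .presentationTimeStamp = CMClockGetTime(CMClockGetHostTimeClock()),  // "now" on the host clock
        .decodeTimeStamp       = kCMTimeInvalid
    };

    CMSampleBufferRef retimed = NULL;
    OSStatus status = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                            source,
                                                            1, &timing,
                                                            &retimed);
    return (status == noErr) ? retimed : NULL;
}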
Thanks for this tip - after reading this I looked at my code and I was propagating the sampleTimingInfo that originated in the client app all the way through because, well, that seemed like the smart thing to do. I, too, sometimes see strange behaviour and I'm hoping this will make it more robust.
I haven't specifically tried it, but you should be able to serialize any NSCoding-conforming object into your NSData and then deserialize it in the client extension.
You can keep a counter of the client connection and disconnection events.
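A trivial sketch of such a counter (class and method names are hypothetical); increment and decrement it from wherever your provider source is told that a client has connected or disconnected:

#import <Foundation/Foundation.h>

// Thread-safe connect/disconnect counter.
@interface ClientCounter : NSObject
- (NSInteger)noteClientConnected;
- (NSInteger)noteClientDisconnected;
- (NSInteger)count;
@end

@implementation ClientCounter {
    NSInteger _count;
}

- (NSInteger)noteClientConnected    { @synchronized (self) { return ++_count; } }
- (NSInteger)noteClientDisconnected { @synchronized (self) { return --_count; } }
- (NSInteger)count                  { @synchronized (self) { return _count;   } }

@end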