Posts

Post not yet marked as solved
0 Replies
255 Views
I'm trying to use MTLIO resource loading in Xcode 15.1 beta 3, but MTLIOCommandQueue, MTLIOCommandBuffer, etc. come up as undefined types. The Apple sample MetalFastResourceLoading seems to work, but a new project created from a template doesn't recognize the symbols. Anyone seeing something similar?

Digging in, it seems the MTLIOCommandQueue.h header in the Simulator 17.2 SDK only contains:

#import <Metal/MTLDefines.h>
#import <Metal/MTLDevice.h>

whereas the working sample, which links against a 14.x SDK, contains lots of code. The iOS 17.2 device framework ALSO contains headers with code; however, building for device also comes up with undefined symbols. In addition, the documentation for the type (https://developer.apple.com/documentation/metal/mtliocommandqueue) lists support for iOS 16.0+, visionOS beta, etc. Does anyone know if earlier betas work?
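For reference, this is roughly the code that fails to compile — a minimal sketch assuming an existing id<MTLDevice> device, a fileURL, and a bufferLength (those names are mine, not from the sample); the calls are the documented MTLIO entry points as I understand them:

#import <Metal/Metal.h>

if (@available(iOS 16.0, macOS 13.0, *)) {
    NSError *error = nil;

    // Create an IO command queue for asynchronous resource loading.
    MTLIOCommandQueueDescriptor *descriptor = [MTLIOCommandQueueDescriptor new];
    descriptor.type = MTLIOCommandQueueTypeConcurrent;
    id<MTLIOCommandQueue> ioQueue = [device newIOCommandQueueWithDescriptor:descriptor
                                                                      error:&error];

    // Open the source file and stream it into an MTLBuffer.
    id<MTLIOFileHandle> handle = [device newIOHandleWithURL:fileURL error:&error];
    id<MTLBuffer> destination = [device newBufferWithLength:bufferLength
                                                    options:MTLResourceStorageModeShared];
    id<MTLIOCommandBuffer> ioCommandBuffer = [ioQueue commandBuffer];
    [ioCommandBuffer loadBuffer:destination
                         offset:0
                           size:bufferLength
                   sourceHandle:handle
             sourceHandleOffset:0];
    [ioCommandBuffer commit];
    [ioCommandBuffer waitUntilCompleted];
}

Every one of these symbols comes up undefined in the new project, even though they compile fine in the sample.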
Posted by piezas.
Post not yet marked as solved
3 Replies
787 Views
Has Apple worked out how WebXR-authored projects in Safari operate on visionOS? Quest has support already, and I imagine many cross-platform experiences (especially for professional markets, where the apps run on Windows through the web) would be served well by this. Is there documentation for this?
Posted by piezas.
Post not yet marked as solved
5 Replies
1.5k Views
So I am completely unable to follow the instructions in the DriverKit/UserClient sample app and run it successfully on macOS (Xcode 14.1). The instructions for signing locally don't match this version of Xcode, and the client crashes when it has an entitlement for user client access. However, I can get the same code to sign and run on an iPad without requesting additional security entitlements. If this is related to entitlements on macOS, why the inconsistency with iPadOS? The same security considerations should be in play. If it is possible to run on iPadOS without additional granted security, why not on macOS?
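For comparison, this is the shape of the macOS client-app entitlement I'm granting — a sketch only, and the dext bundle identifier below is a placeholder, not the sample's actual identifier:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- macOS client app: names the dext(s) this app may open a user client to. -->
    <key>com.apple.developer.driverkit.userclient-access</key>
    <array>
        <string>com.example.MyDriver</string>
    </array>
</dict>
</plist>

As far as I can tell, iPadOS uses a different entitlement for talking to drivers (com.apple.developer.driverkit.communicates-with-drivers), which may be part of the asymmetry I'm seeing, but I'd welcome a correction on that.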
Posted by piezas.
Post not yet marked as solved
2 Replies
1.9k Views
Does anyone know if it's possible to control WHEN the video captureOutput loop collects the camera image? I'd like to synchronize multiple iOS cameras to sub-millisecond precision for an application that covers fast action, so this particular use would have to sync the images to within a tolerance of under 1 ms. Think of it as a network-based genlock signal. Thanks.
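To frame the question: as far as I know, you can't command the sensor to expose at an exact instant, so the closest I've gotten is timestamping each frame in the delegate callback and aligning the streams afterwards. A rough sketch — clockOffsetToMaster is a hypothetical property of mine holding an offset to a shared network clock (e.g. measured via NTP/PTP), not an AVFoundation API:

#import <AVFoundation/AVFoundation.h>

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // The PTS is stamped by the capture pipeline when the frame was captured,
    // not when this callback happens to run, so it's the more trustworthy number.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    Float64 localSeconds = CMTimeGetSeconds(pts);

    // Translate to the shared network epoch for cross-device comparison.
    Float64 networkSeconds = localSeconds + self.clockOffsetToMaster;
    NSLog(@"frame at %.6f local (%.6f network)", localSeconds, networkSeconds);
}

That lets me measure skew between devices after the fact, but it doesn't control when capture happens, which is what I'm really after.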
Posted by piezas.
Post not yet marked as solved
1 Reply
860 Views
Does anyone have an example of streaming video data from a user client to a client app? I have successfully set up a DriverKit USB hardware driver that commands a device to fill the same IOBufferMemoryDescriptor with new data at 30 fps (verified in the driver). I have also managed to map this buffer into the application using CopyClientMemoryForType in the user client and IOConnectMapMemory64 in the app, and I have confirmed the data is the intended image data. What I CANNOT do is see updates to the mapped memory in the app: I map and unmap the data, but the contents don't change, and no matter how many times I map, CopyClientMemoryForType is only called once. How should changes to the underlying dext-side memory be reflected/synchronized to the app? Can I not share a single buffer (4-12 MB, ringed) and synchronize updates?
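For context, this is roughly the app-side pattern I'm attempting — a sketch that assumes the dext publishes the ring buffer as memory type 0 and writes a frame counter at the start of it; kFrameMemoryType and FrameRingHeader are names I've made up for illustration, not from the sample:

#include <IOKit/IOKitLib.h>
#include <mach/mach.h>
#include <stdatomic.h>
#include <stdbool.h>

enum { kFrameMemoryType = 0 }; // hypothetical memory type index

typedef struct {
    _Atomic uint64_t writeIndex; // bumped by the dext after each completed frame
} FrameRingHeader;

static bool MapAndPoll(io_connect_t connect) {
    mach_vm_address_t address = 0;
    mach_vm_size_t size = 0;

    // Map the dext's buffer into this process once; the same pages should
    // then reflect driver-side writes without remapping.
    kern_return_t kr = IOConnectMapMemory64(connect, kFrameMemoryType,
                                            mach_task_self(), &address, &size,
                                            kIOMapAnywhere);
    if (kr != KERN_SUCCESS) return false;

    FrameRingHeader *header = (FrameRingHeader *)address;
    uint64_t lastSeen = 0;
    for (;;) { // polling for the sketch; a real app would block on a notification
        uint64_t now = atomic_load_explicit(&header->writeIndex, memory_order_acquire);
        if (now != lastSeen) {
            lastSeen = now;
            // New frame data should be visible here, in place.
        }
    }
    return true;
}

My expectation was that a single mapping behaves like shared memory and the counter/pixels change under the app's feet, but in practice the mapped contents never change.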
Posted by piezas.
Post not yet marked as solved
4 Replies
1.5k Views
I'm trying to figure out if Metal compute is a viable solution for an application. I'm finding that even on an iPhone 6s Plus with an A9, an empty compute encoder executing in a loop never beats 2.5 ms per iteration. The simplest test I could concoct was:

- (void)runtest {
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
    [computeEncoder endEncoding];
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull buffer) {
        runcounter++;
        NSLog(@"count: %d", runcounter);
        [self runtest];
    }];
    [commandBuffer commit];
}

The loop was run for 10 seconds and I did a simple division of runs/seconds; no other work was being done by the app (display loop, etc.). The breakdown was 2.5 ms between iterations. For comparison, something like a NEON sum of 1024 numbers averaged 0.04 ms and of course executed immediately. I realize this doesn't mean Metal wastes 2.5 ms of resources and it could just be scheduling, but for very low-latency app requirements (camera processing) it does mean that NEON can process immediately while Metal cannot. Can someone confirm this finding or correct the test? Thanks.
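One variant I'm considering, in case the completion-handler round trip is what dominates: commit a batch of empty command buffers back-to-back and compare wall-clock time per buffer against each buffer's own GPU time (GPUStartTime/GPUEndTime — availability on older iOS versions may vary). A sketch, assuming the same _commandQueue:

#import <Metal/Metal.h>
#import <QuartzCore/QuartzCore.h> // CACurrentMediaTime

- (void)measureBatch {
    const int kIterations = 100;
    NSMutableArray<id<MTLCommandBuffer>> *buffers =
        [NSMutableArray arrayWithCapacity:kIterations];

    CFTimeInterval cpuStart = CACurrentMediaTime();
    for (int i = 0; i < kIterations; i++) {
        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
        id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
        [encoder endEncoding];
        [commandBuffer commit]; // no completion handler; don't serialize on the CPU
        [buffers addObject:commandBuffer];
    }
    // Only block on the last buffer; the queue executes them in order.
    [buffers.lastObject waitUntilCompleted];
    CFTimeInterval wall = CACurrentMediaTime() - cpuStart;

    CFTimeInterval gpuTotal = 0;
    for (id<MTLCommandBuffer> buffer in buffers) {
        gpuTotal += buffer.GPUEndTime - buffer.GPUStartTime;
    }
    NSLog(@"wall: %.3f ms (%.3f ms/buffer), GPU busy: %.3f ms total",
          wall * 1000.0, wall * 1000.0 / kIterations, gpuTotal * 1000.0);
}

If the per-buffer wall time drops well below 2.5 ms this way, that would suggest my original test was measuring submit-to-completion round-trip latency rather than a floor on GPU throughput, but I haven't verified that.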
Posted by piezas.