MetalKit

Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

Posts under MetalKit tag

47 Posts

Usage of Touch Gesture to acquire DepthData of a specific point using LiDAR
Hello! The aim of my project is as specified in the title, and the code I am currently trying to modify uses CVPixelBufferGetBaseAddress to acquire the depth data using LiDAR. For some context, I made use of the available "Capturing depth using the LiDAR camera" documentation (AVFoundation) and edited the code after referring to a few Q&As on the Developer Forums. I have a few doubts and would be grateful for insights or a push in the right direction.

Regarding the LiDAR depth data:

1. Where is the origin (0,0), and in what order is the data saved? (Basically, how do row and column correspond to the real-world scene?)
2. How do I add a touch gesture to fill in the X and Y values for "distanceAtXYPoint", so that I acquire the depth data on user touch input rather than in real time?

The function for reference:

// New function to show the depth data value in meters
func depthDataOutput(syncedDepthData: AVCaptureSynchronizedDepthData) {
    let depthData = syncedDepthData.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
    let depthMapWidth = CVPixelBufferGetWidthOfPlane(depthData.depthDataMap, 0)
    let depthMapHeight = CVPixelBufferGetHeightOfPlane(depthData.depthDataMap, 0)
    CVPixelBufferLockBaseAddress(depthData.depthDataMap, .readOnly)
    if let rowData = CVPixelBufferGetBaseAddress(depthData.depthDataMap)?.assumingMemoryBound(to: Float16.self) {
        // Need to find a way to get a specific depth point (using rowData) on a touch gesture.
        // Currently uses the depth point at row = column = 0.
        let depthPoint = rowData[0]
        for y in 0..<depthMapHeight {
            var distancesLine = [Float16]()
            for x in 0..<depthMapWidth {
                // Row-major layout: index = y * width + x
                let distanceAtXYPoint = rowData[y * depthMapWidth + x]
                distancesLine.append(distanceAtXYPoint)
            }
        }
        print("⭐️Depth value of (0,0) point in meters: \(depthPoint)")
    }
    CVPixelBufferUnlockBaseAddress(depthData.depthDataMap, .readOnly)
}

The current real-time console log output is as shown below. Also a slight concern is that the current output at (0,0) at times shows a value greater than 1 m even when the real distance is probably a few centimeters. Any experience and countermeasures on this would also be greatly helpful. Thanks in advance.
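For the touch-gesture question, here is a minimal, untested sketch of one way to wire it up. Everything in it is a hypothetical stand-in: previewView is whatever view shows the camera/depth preview, and the mapping assumes that view displays the full, unrotated depth map (aspect-fill and sensor rotation are deliberately ignored).

import UIKit

final class DepthTapHelper: NSObject {
    let previewView: UIView
    var depthMapWidth = 0      // update these from depthDataOutput
    var depthMapHeight = 0
    private(set) var tappedPoint: CGPoint?

    init(previewView: UIView) {
        self.previewView = previewView
        super.init()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        previewView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        tappedPoint = gesture.location(in: previewView)
    }

    // Call after locking the pixel buffer; the result indexes rowData.
    func depthIndexForTap() -> Int? {
        guard let tap = tappedPoint, depthMapWidth > 0, depthMapHeight > 0 else { return nil }
        // Scale view coordinates into depth-map coordinates (row-major,
        // indexed from the map's first stored pixel, i.e. row 0, column 0).
        let x = min(Int(tap.x / previewView.bounds.width  * CGFloat(depthMapWidth)),  depthMapWidth  - 1)
        let y = min(Int(tap.y / previewView.bounds.height * CGFloat(depthMapHeight)), depthMapHeight - 1)
        return y * depthMapWidth + x
    }
}

With something like this in place, the hard-coded rowData[0] in depthDataOutput can be replaced by rowData[index] whenever depthIndexForTap() returns a value.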
Replies: 0 · Boosts: 0 · Views: 531 · Dec ’23
MTLTexture streaming to app directory
For better memory usage when working with MTLTextures (editing + displaying in render passes, compute shaders, etc.), is it possible to save a texture to the app's Documents folder, and then use an UnsafeMutablePointer to access/modify its contents before displaying it in a render pass? And would this be performant (i.e., 60 fps)? That way the texture wouldn't have to stay resident in memory all the time, but its contents could still be edited and displayed when needed.
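One possible shape for this, sketched under assumptions rather than offered as a known-good implementation: memory-map a file in Documents, wrap the mapping in an MTLBuffer via makeBuffer(bytesNoCopy:), and create a linear texture view over that buffer. The function name and parameters below are illustrative, and whether the OS paging the file in and out can sustain 60 fps depends entirely on the access pattern, so it would need profiling.

import Foundation
import Metal

func makeFileBackedTexture(device: MTLDevice, url: URL,
                           width: Int, height: Int) -> MTLTexture? {
    let pixelFormat = MTLPixelFormat.bgra8Unorm   // 4 bytes per pixel, assumed
    // Buffer-backed textures need an aligned bytesPerRow.
    let rowAlignment = device.minimumLinearTextureAlignment(for: pixelFormat)
    let bytesPerRow = ((width * 4 + rowAlignment - 1) / rowAlignment) * rowAlignment

    // makeBuffer(bytesNoCopy:) wants page-aligned memory; mmap provides it,
    // and the length is rounded up to whole pages.
    let pageSize = Int(getpagesize())
    let length = ((bytesPerRow * height + pageSize - 1) / pageSize) * pageSize

    let fd = open(url.path, O_RDWR | O_CREAT, 0o644)
    guard fd >= 0 else { return nil }
    ftruncate(fd, off_t(length))
    guard let ptr = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0),
          ptr != MAP_FAILED else { close(fd); return nil }
    close(fd)   // the mapping stays valid after the descriptor is closed

    guard let buffer = device.makeBuffer(bytesNoCopy: ptr, length: length,
                                         options: .storageModeShared,
                                         deallocator: { p, len in munmap(p, len) })
    else { munmap(ptr, length); return nil }

    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: pixelFormat,
                                                        width: width, height: height,
                                                        mipmapped: false)
    desc.storageMode = .shared                    // must match the buffer
    desc.usage = [.shaderRead, .shaderWrite]
    return buffer.makeTexture(descriptor: desc, offset: 0, bytesPerRow: bytesPerRow)
}

Because the buffer is created over the mapped pointer, buffer.contents() is the same memory, so CPU-side edits through an UnsafeMutableRawPointer and GPU reads in a render pass see the same bytes; msync would flush changes back to the file.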
Replies: 1 · Boosts: 0 · Views: 801 · Nov ’23
AVSampleBufferDisplayLayer vs CAMetalLayer for displaying HDR
I have been using MTKView to display CVPixelBuffers from the camera. I use several options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone-map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses the pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise on functionality?
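For reference, a sketch of the kind of explicit setup the MTKView/CAMetalLayer route involves; the PQ color space and the HDR10 metadata values are illustrative only, and edrMetadata requires iOS 16+ / macOS 10.15+.

import MetalKit
import QuartzCore

func configureForHDR(_ view: MTKView) {
    guard let layer = view.layer as? CAMetalLayer else { return }
    view.colorPixelFormat = .rgba16Float          // EDR-capable drawable format
    layer.wantsExtendedDynamicRangeContent = true
    layer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
    // Static HDR10 metadata the system can use when tone mapping to the display:
    layer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.005,
                                            maxLuminance: 1000,
                                            opticalOutputScale: 100)
}

AVSampleBufferDisplayLayer skips all of this by reading the pixel buffer attachments, which is the trade-off the question is about: less to configure, but fewer knobs to turn.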
Replies: 0 · Boosts: 0 · Views: 714 · Nov ’23
Frames out of order using AVAssetWriter
We are using AVAssetWriter to write videos using both HEVC and H.264 encoding. Occasionally, we get reports of choppy footage in which frames appear out of order when played back on a Mac (QuickTime) or an iOS device (stock Photos app). This occurs extremely unpredictably, often not starting until 20+ minutes of filming, but occasionally happening as soon as filming starts. Interestingly, users have reported that the issue goes away while editing or viewing on a different platform (e.g., Linux) or in the built-in Google Drive player, but comes back as soon as the video is exported or downloaded again. When this occurs in an HEVC file, converting to H.264 seems to resolve it. I haven't found a similar fix for H.264 files.

I suspect an AVAssetWriter encoding issue but haven't been able to uncover the source. Running a stream analyzer on HEVC files with this issue reveals the following error:

Short-term reference picture with POC = [some number] seems to have been removed or not correctly decoded.

However, running a stream analyzer on H.264 files with the same playback issue seems to show nothing wrong.

At a high level, our video pipeline looks something like this (a sketch of step 3 follows below):

1. Grab a sample buffer in captureOutput(_ captureOutput: AVCaptureOutput!, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer!)
2. Perform some Metal rendering on that buffer
3. Pass the resulting CVPixelBuffer to the AVAssetWriterInputPixelBufferAdaptor associated with our AVAssetWriter

Example files can be found here: https://drive.google.com/drive/folders/1OjDZ3XaC-ubD5hyDiNvMQGl2NVqZbWnR?usp=sharing

This includes a video file suffering from this issue, the same file fixed after converting to mp4, and a screen recording of the distorted playback in QuickTime.

Can anyone help point me in the right direction to solving this issue? I can provide more details as necessary.
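As a point of comparison for step 3, here is a minimal sketch (with hypothetical names) of the append path. One common cause of shuffled playback is presentation timestamps that are reused or not strictly increasing, so carrying the capture PTS through the Metal stage unchanged is worth verifying.

import AVFoundation

func append(rendered pixelBuffer: CVPixelBuffer,
            from sampleBuffer: CMSampleBuffer,
            to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    // Reuse the original capture timestamp for the rendered frame.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    // Dropping a frame is safer than blocking the capture queue here.
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    if !adaptor.append(pixelBuffer, withPresentationTime: pts) {
        print("append failed; check the writer's status and error")
    }
}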
Replies: 4 · Boosts: 2 · Views: 1.5k · Nov ’23