RealityKit: visualizing the virtual depth texture from the post-process callback
I am using RealityKit and the ARView PostProcessContext to get the sourceDepthTexture of the current virtual scene in RealityKit, using the .nonAR camera mode. My experience with Metal is limited to RealityKit GeometryModifier and SurfaceShader for CustomMaterial, but I am excited to learn more! Having studied the Underwater sample code, I have a general idea of how I want to explore the capabilities of a proper post-processing pipeline in my RealityKit project, but right now I just want to visualize this MTLTexture to see what the virtual depth of the scene looks like.

Here’s my current approach, trying to create a depth UIImage from the context sourceDepthTexture:

func postProcess(context: ARView.PostProcessContext) {
    let depthTexture = context.sourceDepthTexture
    var uiImage: UIImage? // or CG/CI

    if processPost {
        print("#P Process: Post Process BLIT")

        // UIImage from MTLTexture
        uiImage = try? createDepthUIImage(from: depthTexture)

        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()

        getPostProcessed()
    } else {
        print("#P No Process: Pass-Through")
        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()
    }
}

func createDepthUIImage(from metalTexture: MTLTexture) throws -> UIImage {
    guard let device = MTLCreateSystemDefaultDevice() else {
        throw CIMError.noDefaultDevice
    }

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .depth32Float_stencil8,
        width: metalTexture.width,
        height: metalTexture.height,
        mipmapped: false)
    descriptor.usage = [.shaderWrite, .shaderRead]

    guard let texture = device.makeTexture(descriptor: descriptor) else {
        throw NSError(domain: "Failed to create Metal texture", code: -1, userInfo: nil)
    }

    // Blit!
    let commandQueue = device.makeCommandQueue()
    let commandBuffer = commandQueue?.makeCommandBuffer()
    let blitEncoder = commandBuffer?.makeBlitCommandEncoder()
    blitEncoder?.copy(from: metalTexture, to: texture)
    blitEncoder?.endEncoding()
    commandBuffer?.commit()

    // Raw pixel bytes
    let bytesPerRow = 4 * texture.width
    let dataSize = texture.height * bytesPerRow
    var bytes = [UInt8](repeating: 0, count: dataSize)
    // var depthData = [Float](repeating: 0, count: dataSize)
    bytes.withUnsafeMutableBytes { bytesPtr in
        texture.getBytes(
            bytesPtr.baseAddress!,
            bytesPerRow: bytesPerRow,
            from: .init(origin: .init(), size: .init(width: texture.width, height: texture.height, depth: 1)),
            mipmapLevel: 0
        )
    }

    // CGDataProvider from the raw bytes
    let dataProvider = CGDataProvider(data: Data(bytes: bytes, count: bytes.count) as CFData)

    // CGImage from the data provider
    let cgImage = CGImage(width: texture.width,
                          height: texture.height,
                          bitsPerComponent: 8,
                          bitsPerPixel: 32,
                          bytesPerRow: bytesPerRow,
                          space: CGColorSpaceCreateDeviceRGB(),
                          bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                          provider: dataProvider!,
                          decode: nil,
                          shouldInterpolate: true,
                          intent: .defaultIntent)

    // Return as UIImage
    return UIImage(cgImage: cgImage!)
}

I have hacked together the ‘createDepthUIImage’ function with generative aid and online research to provide some visual feedback, but it looks like I am converting the depth values incorrectly, or somehow tapping into the stencil component of the pixels in the texture. Either way I am out of my depth, and would love some help.
Ideally, I would like to produce a grayscale depth image, but really any guidance on how I can visualize the depth would be greatly appreciated. As you can see from the magnified view on the right, there are some artifacts or pixels that are processed differently than the core stencil. The empty background is transparent in the image as expected.
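
For anyone poking at the same thing, this is the rough direction I am experimenting with: blit only the depth plane into a CPU-visible buffer, read it back as 32-bit floats, normalize, and wrap the result in an 8-bit grayscale CGImage. This is an unverified sketch, not a working solution; depthTextureToGrayscaleImage is a made-up helper name, and it assumes a combined depth/stencil source like the .depth32Float_stencil8 format above (for a plain .depth32Float texture the blit option would be dropped).

import Metal
import UIKit

// Sketch: copy the depth plane of a depth(/stencil) texture into a shared
// buffer, normalize the float values, and build a grayscale UIImage from them.
func depthTextureToGrayscaleImage(_ depthTexture: MTLTexture,
                                  device: MTLDevice,
                                  commandQueue: MTLCommandQueue) -> UIImage? {
    let width = depthTexture.width
    let height = depthTexture.height
    let bytesPerRow = width * MemoryLayout<Float>.stride

    guard let readback = device.makeBuffer(length: bytesPerRow * height, options: .storageModeShared),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return nil }

    // Copy only the depth component; .depthFromDepthStencil is for combined
    // depth/stencil formats such as depth32Float_stencil8.
    blit.copy(from: depthTexture,
              sourceSlice: 0,
              sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: width, height: height, depth: 1),
              to: readback,
              destinationOffset: 0,
              destinationBytesPerRow: bytesPerRow,
              destinationBytesPerImage: bytesPerRow * height,
              options: .depthFromDepthStencil)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted() // make sure the copy has finished before reading

    // Interpret the buffer as raw depth floats and remap them to 0...255.
    let depths = readback.contents().bindMemory(to: Float.self, capacity: width * height)
    var minDepth = Float.greatestFiniteMagnitude
    var maxDepth = -Float.greatestFiniteMagnitude
    for i in 0..<(width * height) {
        minDepth = min(minDepth, depths[i])
        maxDepth = max(maxDepth, depths[i])
    }
    let range = max(maxDepth - minDepth, Float.leastNormalMagnitude)
    var pixels = [UInt8](repeating: 0, count: width * height)
    for i in 0..<(width * height) {
        let normalized = min(max((depths[i] - minDepth) / range, 0), 1)
        pixels[i] = UInt8(normalized * 255)
    }

    // Wrap the 8-bit gray pixels in a CGImage / UIImage.
    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 8, bitsPerPixel: 8,
                                bytesPerRow: width,
                                space: CGColorSpaceCreateDeviceGray(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                provider: provider, decode: nil,
                                shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}

Since waitUntilCompleted blocks, this is only meant as a one-off debugging dump, not something to run every frame inside postProcess.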
Replies: 0 · Boosts: 0 · Views: 560 · Activity: Feb ’24
Trouble encoding CapturedRoomData with JSONEncoder (used to work)
Hello, I used to be able to encode CapturedRoomData using JSONEncoder with the code below:

// Encode CapturedRoomData
func encodeRoomData(_ roomData: CapturedRoomData) -> Data? {
    print("#ECRD1 - Data: \(roomData)")
    do {
        let encodedRoom = try JSONEncoder().encode(roomData)
        print("#ECRD2 - Encoded: \(encodedRoom.description)")
        return encodedRoom
    } catch {
        print("#ECRD3 - Failed with error: \(error)")
    }
    return nil
}

A few weeks ago I noticed this approach is no longer working. The encoding fails and I get the following error printed:

#ECRD3 - Failed with error: invalidValue(RoomPlan.CapturedRoomData, Swift.EncodingError.Context(codingPath: [], debugDescription: "Invalid data", underlyingError: nil))

Can anyone help me find the root of this problem? For reference, here’s what the printed CapturedRoomData looks like (with the keyframes omitted):

#ECRD1 - Data: CapturedRoomData(keyframes: [...], coreAsset: <RSAsset: 0x283988bd0>, arFrameReferenceOriginTransform: simd_float4x4([[0.9995456, 0.0, 0.030147359, 0.0], [0.0, 1.0, 0.0, 0.0], [-0.030147359, 0.0, 0.9995456, 0.0], [0.38664898, 0.93699455, 0.38685757, 1.0]]))
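
For completeness, the decode side is just the standard Codable counterpart; a minimal sketch (decodeRoomData is only an illustrative name, not part of the code that is failing):

import Foundation
import RoomPlan

// Illustrative round-trip check: decode a CapturedRoomData blob that was
// previously produced by JSONEncoder. CapturedRoomData conforms to Codable,
// so this should succeed whenever the encode above succeeds.
func decodeRoomData(_ data: Data) -> CapturedRoomData? {
    do {
        return try JSONDecoder().decode(CapturedRoomData.self, from: data)
    } catch {
        print("#DCRD - Failed with error: \(error)")
        return nil
    }
}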
Replies: 0 · Boosts: 0 · Views: 446 · Activity: Mar ’24
Issue storing SIMD3<Float> (and other simd types) in a SwiftData model
I'm encountering an issue when trying to store a SIMD3<Float> in a SwiftData model. Since SIMD3<Float> already conforms to Codable, I expected it to work. However, attempting to store a single SIMD3<Float> crashes with the following error:

Fatal error: Unexpected property within Persisted Struct/Enum: Builtin.Vec4xFPIEEE32

Interestingly, storing an array of vectors, [SIMD3<Float>], works perfectly fine. The issue only arises when trying to store a single SIMD3<Float>. I'm not looking for a workaround (I can break the vector into individual floats in a custom Codable struct to get by), but I'd like to understand why storing a Codable SIMD3<Float> in SwiftData results in this crash. Is this a limitation of SwiftData, or is there something I'm missing about how vectors are handled? Any insights would be greatly appreciated!
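
For reference, a minimal sketch of the kind of setup involved (the MarkerModel and Vector3 names are made up for illustration; the single SIMD3<Float> property is what triggers the crash):

import SwiftData
import simd

@Model
final class MarkerModel {
    // A single vector property: this is what crashes at runtime with
    // "Unexpected property within Persisted Struct/Enum: Builtin.Vec4xFPIEEE32".
    var position: SIMD3<Float>

    // An array of vectors persists without any problem.
    var path: [SIMD3<Float>]

    init(position: SIMD3<Float>, path: [SIMD3<Float>]) {
        self.position = position
        self.path = path
    }
}

// The workaround mentioned above: a plain Codable struct holding the components.
struct Vector3: Codable {
    var x: Float
    var y: Float
    var z: Float

    init(_ vector: SIMD3<Float>) {
        x = vector.x
        y = vector.y
        z = vector.z
    }

    var simd: SIMD3<Float> { SIMD3(x, y, z) }
}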
Replies: 3 · Boosts: 0 · Views: 330 · Activity: Sep ’24