Posts

Post not yet marked as solved
0 Replies
155 Views
I am trying to keep the selected tab bar item in sync when a user presses back and navigates backward in my web-wrapper iOS app. Here is how the app works: when a user logs in, they are presented with the Home screen. To navigate, they select an item on the tab bar. This tells WebKit to redirect to that URL, and that webpage is then shown in the app (wrapped). A back button also appears. When the user presses the back button, the webView.goBack() method sends the user back through the webView.backForwardList. The user is now on the previous page, however the tab bar has not updated. In other words, the user is back on the first tab's page but the second tab is still highlighted. I tried fixing this by telling the tab bar controller which item to select, but this creates a state malfunction because it also tells the webView to navigate to that URL again, so the user isn't actually going back through the web history, they are going forward while simulating going back. I need to highlight the tab bar item that matches the page we have gone back to, without selecting it and triggering another navigation. Does anyone have an idea for how to manually highlight or select a tab bar item without actually navigating to it? Thank you!
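One possible approach, sketched below with assumed names (urlForTab(_:), tabIndex(for:)) and assuming a standalone UITabBar inside a single web-view controller rather than a UITabBarController: after each navigation finishes, map the web view's current URL back to a tab and set tabBar.selectedItem directly, which highlights the item without invoking tabBar(_:didSelect:); a flag guards any other code that reacts to selection changes.

import UIKit
import WebKit

// Sketch only: urlForTab(_:), tabIndex(for:) and the outlet wiring are assumptions
// standing in for whatever the real app uses.
final class WrapperViewController: UIViewController, WKNavigationDelegate, UITabBarDelegate {
    @IBOutlet var tabBar: UITabBar!
    @IBOutlet var webView: WKWebView!

    /// Set while the selection is being updated to mirror web history, so any
    /// selection-reacting code knows not to trigger a new page load.
    private var isSyncingSelection = false

    // Normal path: the user tapped a tab, so load that tab's page.
    func tabBar(_ tabBar: UITabBar, didSelect item: UITabBarItem) {
        guard !isSyncingSelection, let index = tabBar.items?.firstIndex(of: item) else { return }
        webView.load(URLRequest(url: urlForTab(index)))
    }

    @objc func backTapped() {
        webView.goBack()
    }

    // After any navigation (including goBack()), mirror the current URL onto the tab bar.
    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        guard let current = webView.url,
              let index = tabIndex(for: current),
              let items = tabBar.items, items.indices.contains(index) else { return }
        isSyncingSelection = true
        // Setting selectedItem directly highlights the tab without calling
        // tabBar(_:didSelect:), so no extra load is triggered.
        tabBar.selectedItem = items[index]
        isSyncingSelection = false
    }

    // Hypothetical mappings between tabs and site sections.
    private func urlForTab(_ index: Int) -> URL { URL(string: "https://example.com/tab\(index)")! }
    private func tabIndex(for url: URL) -> Int? { nil /* match url.path to a tab in the real app */ }
}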
Post not yet marked as solved
1 Reply
857 Views
Hi! As the title says, I am using AVMutableVideoComposition and AVAssetExportSession to remove the background of a video on export; the background removal itself will come from Vision/ML predictions. I am having trouble putting AVVideoCompositing, AVVideoCompositionInstructionProtocol, and AVAssetExportSession together. Does anyone have more up-to-date resources or examples?
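A minimal, hedged sketch of how the three pieces can fit together: a custom compositor class, a custom instruction that names the source track, and an export session that points at both. The class names are made up and the Vision/ML masking is stubbed out as a passthrough.

import AVFoundation
import CoreVideo

// Illustrative compositor: real background removal would go inside startRequest.
final class MaskingCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String : Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let frame = request.sourceFrame(byTrackID: trackID) else {
            request.finish(with: NSError(domain: "MaskingCompositor", code: -1))
            return
        }
        // TODO: run the Vision/ML mask over `frame` and draw the result into a
        // buffer from request.renderContext.newPixelBuffer(). Passthrough for now.
        request.finish(withComposedVideoFrame: frame)
    }
}

// Custom instruction so the compositor knows which track to read.
final class MaskingInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    let timeRange: CMTimeRange
    let enablePostProcessing = false
    let containsTweening = true
    let requiredSourceTrackIDs: [NSValue]?
    let passthroughTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange, trackID: CMPersistentTrackID) {
        self.timeRange = timeRange
        self.requiredSourceTrackIDs = [NSNumber(value: trackID)]
        super.init()
    }
}

// Wiring it into an export (error handling elided).
func exportWithMask(asset: AVAsset, to outputURL: URL) {
    guard let track = asset.tracks(withMediaType: .video).first,
          let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) else { return }

    let composition = AVMutableVideoComposition()
    composition.customVideoCompositorClass = MaskingCompositor.self
    composition.renderSize = track.naturalSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.instructions = [MaskingInstruction(timeRange: CMTimeRange(start: .zero, duration: asset.duration),
                                                   trackID: track.trackID)]

    session.videoComposition = composition
    session.outputURL = outputURL
    session.outputFileType = .mov
    session.exportAsynchronously {
        print("Export finished with status: \(session.status.rawValue)")
    }
}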
Post not yet marked as solved
0 Replies
816 Views
I'm trying to project the positions of points from an image to the location of an SCNNode using SCNNodeRendererDelegate during an ARSession. I have the node in the scene, and I am sending uniforms to the renderer using the arguments from renderNode(_ node: SCNNode, renderer: SCNRenderer, arguments: [String : Any]):

let modelTransform = arguments["kModelTransform"] as! SCNMatrix4
let viewTransform = arguments["kViewTransform"] as! SCNMatrix4
let modelViewTransform = arguments["kModelViewTransform"] as! SCNMatrix4
let modelViewProjectionTransform = arguments["kModelViewProjectionTransform"] as! SCNMatrix4
let projectionTransform = arguments["kProjectionTransform"] as! SCNMatrix4
let normalsTransform = arguments["kNormalTransform"] as! SCNMatrix4

In the vertex shader, I unproject each depth pixel using intrinsics = session.currentFrame!.camera.intrinsics:

uint2 pos; // specified in pixel coordinates, normalizing?
pos.y = vertexID / depthTexture.get_width();
pos.x = vertexID % depthTexture.get_width();

float depthMultiplier = 100.0f;
float depth = depthTexture.read(pos).x * depthMultiplier;

float xrw = (pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
float yrw = (pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];

float4 xyzw = { xrw, yrw, depth, 1.f };

My goal is to calculate the clip-space position for each vertex using the node uniforms. I've been multiplying the model-view-projection matrix a number of ways, but almost every time the points are either skewed on the image plane or, if projected properly, don't adhere to the position of the modelTransform that I pass in (i.e. when I raycast out and get a transform, set the node there, then use the node's renderer callback to grab its transform to pass into the vertex shader). Which transform matrices should I multiply each vertex by? I am using session.currentFrame?.camera to get camera.viewMatrix(for: .portrait), but should I use the viewTransform matrix from the node instead? I also get the projection matrix from camera.projectionMatrix(for: .portrait, viewportSize: renderer.currentViewport.size, zNear: CGFloat(znear), zFar: CGFloat(zfar)), but should I use the projectionTransform from the node, or the modelViewProjectionTransform from the node? Could I just multiply nodeUniforms.modelViewProjectionTransform * xyzw in the shader? If you need more clarification about what I am trying to do, let me know! Thanks
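For reference, a CPU-side sketch of one consistent chain, assuming the unprojected depth point (the shader's xrw/yrw/depth) is treated as a position in the node's local space. Comparing its output against the node's kModelViewProjectionTransform argument (watching for row- vs column-major conventions when converting SCNMatrix4) is one way to pin down which convention the shader should follow; the names below are illustrative.

import ARKit
import SceneKit
import simd

func clipSpacePosition(for localPoint: simd_float4,
                       node: SCNNode,
                       frame: ARFrame,
                       viewportSize: CGSize) -> simd_float4 {
    let model = node.simdWorldTransform                         // node local -> world
    let view = frame.camera.viewMatrix(for: .portrait)          // world -> eye
    let projection = frame.camera.projectionMatrix(for: .portrait,
                                                    viewportSize: viewportSize,
                                                    zNear: 0.001,
                                                    zFar: 1000) // eye -> clip
    // Use either this chain or the node's modelViewProjectionTransform, not a mix of
    // the two; combining them applies the view/projection twice, which produces the
    // skewed results described above.
    return projection * view * model * localPoint
}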
Post not yet marked as solved
3 Replies
990 Views
I'm trying to make a DrawableQueue with a Descriptor that uses a pixel format of .depth32Float. However, this fails with a fatal error: Could not create Depth Drawable Queue: unsupportedFormat. Is there any alternative to .depth32Float for the TextureResource.DrawableQueue.Descriptor? Thanks!
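For anyone hitting the same error, a hedged sketch of where the pixel format goes, plus a brute-force way to check which formats a DrawableQueue will actually accept on a given device. The candidate list below is a guess, not a documented support matrix; if only color formats are accepted, depth could be packed into a single-channel color format instead of .depth32Float.

import RealityKit
import Metal

// Try candidate formats in order and return the first queue that RealityKit accepts.
// Usage flags may also influence what is accepted, so adjust them to match the pipeline.
func makeDepthLikeDrawableQueue(width: Int, height: Int) -> TextureResource.DrawableQueue? {
    let candidates: [MTLPixelFormat] = [.r32Float, .r16Float, .rgba16Float, .rgba8Unorm]
    for format in candidates {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: format,
            width: width,
            height: height,
            usage: [.shaderRead, .shaderWrite],
            mipmapsMode: .none)
        if let queue = try? TextureResource.DrawableQueue(descriptor) {
            print("DrawableQueue accepted format \(format.rawValue)")
            return queue
        }
    }
    return nil
}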
Post not yet marked as solved
1 Reply
785 Views
Hi! I am using ARSCNView to render an SCNNode whose geometry uses custom SCNShadable shaders. I am getting a black screen instead of the camera feed, and these errors are printed on every draw call:

[GPUDebug] Texture usage flags mismatch executing fragment function "background_video_alpha_0_frag" encoder: "0", draw: 0
[GPUDebug] Texture usage flags mismatch executing fragment function "background_video_frag" encoder: "1", draw: 0

I have searched the project for Metal shaders named "background_video_alpha_0_frag" and "background_video_frag" but have found none. I have also searched the project for any residual code from previous attempts that has anything to do with a background. Nothing. While these errors are printing to the console, I cannot capture the GPU frame to debug it either. Sometimes I can see the camera feed, such as after a long break in development, but when I run the app again without changing the code, the errors come back. Does anyone have any ideas? Thanks!
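Those fragment function names don't appear to come from the project; they look like SceneKit's internal camera-background pass, which suggests the custom shading setup may be interfering with it. One way to isolate the problem (a sketch, not a confirmed fix): temporarily replace the custom shaders with the simplest possible shader modifier and see whether the GPUDebug messages persist.

import SceneKit

// If the "Texture usage flags mismatch" messages still appear with this no-op
// modifier, the custom shading code is unlikely to be the cause. The entry point
// and dictionary are standard SCNShadable API; the red tint is just a visual check.
func applyMinimalShaderModifier(to node: SCNNode) {
    let surfaceModifier = """
    #pragma body
    _surface.diffuse = float4(1.0, 0.0, 0.0, 1.0);
    """
    node.geometry?.firstMaterial?.shaderModifiers = [.surface: surfaceModifier]
}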
Post not yet marked as solved
3 Replies
971 Views
I am translating an app I created from Metal to RealityKit, because it was too difficult to hand-roll raycasting and transformations in a pure Metal AR application. Instead of rendering the camera feed and point cloud data onto an MTKView via a command encoder, I need to use an ARView (via RealityKit) so that RealityKit can power the spatial transformations of the AR objects. I am trying to use CustomMaterial in RealityKit to run my Metal geometry and surface shaders on top of a plane. However, the standard plane only has 4 vertices, and for my geometry modifier to work I need a couple thousand vertices on the plane. When I made my own custom mesh using the code from codingxr.com (below), there is a lot of lag when running the app. I need lots of vertices so that I can access them in the geometry shader modifier. Is there a way to create a performant mesh with many thousands of vertices in RealityKit? Or is there a way to make Metal and RealityKit work in the same scene, so that I can render to an MTKView but also take advantage of the power of ARView?

private func buildMesh(numCells: simd_int2, cellSize: Float) -> ModelEntity {
    var positions: [simd_float3] = []
    var textureCoordinates: [simd_float2] = []
    var triangleIndices: [UInt32] = []

    let size: simd_float2 = [Float(numCells.x) * cellSize, Float(numCells.y) * cellSize]
    // Offset is used to make the origin in the center
    let offset: simd_float2 = [size.x / 2, size.y / 2]
    var i: UInt32 = 0

    for row in 0..<numCells.y {
        for col in 0..<numCells.x {
            let x = (Float(col) * cellSize) - offset.x
            let z = (Float(row) * cellSize) - offset.y

            positions.append([x, 0, z])
            positions.append([x + cellSize, 0, z])
            positions.append([x, 0, z + cellSize])
            positions.append([x + cellSize, 0, z + cellSize])

            textureCoordinates.append([0.0, 0.0])
            textureCoordinates.append([1.0, 0.0])
            textureCoordinates.append([0.0, 1.0])
            textureCoordinates.append([1.0, 1.0])

            // Triangle 1
            triangleIndices.append(i)
            triangleIndices.append(i + 2)
            triangleIndices.append(i + 1)

            // Triangle 2
            triangleIndices.append(i + 1)
            triangleIndices.append(i + 2)
            triangleIndices.append(i + 3)

            i += 4
        }
    }

    var descriptor = MeshDescriptor(name: "proceduralMesh")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(triangleIndices)
    descriptor.textureCoordinates = MeshBuffer(textureCoordinates)

    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .white, texture: .init(try! .load(named: "base_color")))
    material.normal = .init(texture: .init(try! .load(named: "normal")))

    let mesh = try! MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [material])
}
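One variation worth trying (a sketch using the same MeshDescriptor API, not a guaranteed fix for the lag): share vertices between cells instead of emitting four per cell. This keeps the grid density the geometry modifier needs while cutting the vertex count roughly four-fold; profiling would still be needed to confirm where the time actually goes.

import RealityKit
import simd

// Builds a grid with one vertex per grid intersection: (cols + 1) * (rows + 1)
// vertices total, instead of 4 per cell as in the snippet above.
func buildSharedVertexGrid(numCells: simd_int2, cellSize: Float) throws -> MeshResource {
    let cols = Int(numCells.x), rows = Int(numCells.y)
    let offset = simd_float2(Float(cols), Float(rows)) * cellSize / 2

    var positions: [simd_float3] = []
    var uvs: [simd_float2] = []
    var indices: [UInt32] = []

    for row in 0...rows {
        for col in 0...cols {
            positions.append([Float(col) * cellSize - offset.x, 0, Float(row) * cellSize - offset.y])
            uvs.append([Float(col) / Float(cols), Float(row) / Float(rows)])
        }
    }

    // Two triangles per cell, indexing into the shared vertices (same winding as above).
    for row in 0..<rows {
        for col in 0..<cols {
            let i = UInt32(row * (cols + 1) + col)
            let right = i + 1
            let below = i + UInt32(cols + 1)
            let belowRight = below + 1
            indices += [i, below, right, right, below, belowRight]
        }
    }

    var descriptor = MeshDescriptor(name: "sharedVertexGrid")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}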
Post not yet marked as solved
0 Replies
1.2k Views
Hi there! I have accomplished rendering the point cloud via a Metal texture with depth. I plugged in gestures to manipulate the object over the top of the camera feed, and I am able to investigate it up close like any volumetric point cloud. However, I am trying to anchor it to an ARAnchor so that I can move around my physical space and investigate the stationary cloud. I have an ARSession running, as well as a custom Renderer that handles Metal. I think it comes down to getting the final view matrix, which is then set on the render encoder:

renderEncoder.setVertexBytes(&finalViewMatrix, length: MemoryLayout.size(ofValue: finalViewMatrix), index: 0)

I believe I can solve the anchoring by doing the correct matrix math. To do this, I guess I need the world matrix of the space the anchor is in, plus the anchor's local model matrix, which I would multiply with the point cloud's model matrix to parent the cloud to the anchor. Then I could multiply the projection matrix and the view matrix with that model matrix. Does this sound like a reasonable way to go about the issue? I have already tried many methods and haven't quite achieved it; in particular, when moving the physical device forwards and backwards, the cloud moves with the device. Thank you!
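A per-frame sketch of that chain, with illustrative names (cloudLocal stands for whatever extra transform the gestures accumulate) and assuming the cloud's vertices are expressed in the anchor's local space: rebuild the model matrix from the anchor each frame, take the view and projection matrices from the current ARFrame, and upload the product.

import ARKit
import Metal
import UIKit
import simd

// Recompute and upload the model-view-projection matrix every frame.
func updateCloudUniforms(frame: ARFrame,
                         anchor: ARAnchor,
                         cloudLocal: simd_float4x4,
                         viewportSize: CGSize,
                         renderEncoder: MTLRenderCommandEncoder) {
    let model = anchor.transform * cloudLocal                    // anchor local -> world
    let view = frame.camera.viewMatrix(for: .portrait)           // world -> eye, changes every frame
    let projection = frame.camera.projectionMatrix(for: .portrait,
                                                    viewportSize: viewportSize,
                                                    zNear: 0.001,
                                                    zFar: 1000)  // eye -> clip
    var mvp = projection * view * model
    // If the cloud still follows the device when walking forward/backward, the view
    // matrix is likely stale or applied on the wrong side of the chain.
    renderEncoder.setVertexBytes(&mvp,
                                 length: MemoryLayout<simd_float4x4>.stride,
                                 index: 0)
}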