
UIButton starts out focused but can't navigate back to it
I'm an experienced iOS developer but new to tvOS, and I'm finding the focus engine to be somewhat confounding. My app launches with a UINavigationController whose root view is a UITableViewController. When I select a row, a UIViewController is pushed. That view controller contains a set of nested UICollectionViews, and one of the collection view cells contains three UIButtons. When the view first appears, the first of those (Button1) has focus. I can move focus from Button1 to Button2, and from Button2 to Button3. I can also go back from Button3 to Button2. But I cannot navigate back to Button1, and I'm trying to figure out why.

I have focus logging turned on. It's very verbose, but here are the parts that seem useful to me. When the view controller is first displayed, I see:

    Updating focus from <UITableViewCell: 0x104841a00> to <DXETV.CustomButton: 0x104232610> in focus system <UIFocusSystem: 0x60000370c900>.
    The result of the focus update was determined from the following preferred focus search:
    |
    | Starting preferred focus search.
    | <UINavigationController: 0x10600f000>
    | └ <DXETV.RenderExampleViewController: 0x10484ae00>
    |   └ <DXETV.RenderExampleView: 0x10421ab70>
    |     └ <DXETV.LayoutElementView: 0x107021000>
    |       └ <DXETV.LayoutCollectionView: 0x10707d200>
    |         └ <DXETV.LayoutViewCell: 0x10681ee00>
    |           └ <DXETV.LayoutElementView: 0x106862000>
    |             └ <DXETV.LayoutCollectionView: 0x106865a00>
    |               └ <DXETV.LayoutViewCell: 0x106851000>
    |                 └ <DXETV.LayoutElementView: 0x104891800>
    |                   └ <DXETV.LayoutCollectionView: 0x10488da00>
    |                     └ <DXETV.LayoutViewCell: 0x1048de600>
    |                       └ <DXETV.CustomButton: 0x104232610>
    |                         (info) It's focusable!
    |

This seems right: focus moves from the table view cell to Button1, and this is the view hierarchy I expect. Then I see:

    Creating focus scroll animator entry for environment <DXETV.LayoutCollectionView: 0x10707d200>

This is the topmost collection view. That line is followed by many lines about locking and unlocking this collection view, and then:

    Removing focus scroll animator entry for environment <DXETV.LayoutCollectionView: 0x10707d200>

I don't know whether this is normal or not. After I move focus from Button1 to Button2, I see this, which seems correct:

    Updating focus with context <UIFocusUpdateContext: 0x6000033200a0: previouslyFocusedItem=<DXETV.CustomButton 0x104232610>, nextFocusedItem=<DXETV.CustomButton 0x104312900>, focusHeading=Down>:
    Moving focus from <DXETV.CustomButton: 0x104232610> to <DXETV.CustomButton: 0x104312900> in focus system <UIFocusSystem: 0x60000370c900>.

When I move focus from Button2 to Button3, I get this, which is as expected:

    Updating focus with context <UIFocusUpdateContext: 0x60000330c5a0: previouslyFocusedItem=<DXETV.CustomButton 0x104312900>, nextFocusedItem=<DXETV.CustomButton 0x1043134d0>, focusHeading=Down>:
    Moving focus from <DXETV.CustomButton: 0x104312900> to <DXETV.CustomButton: 0x1043134d0> in focus system <UIFocusSystem: 0x60000370c900>.

followed by another round of creating and removing a focus scroll animator entry, this time for the middle collection view. Moving from Button3 back to Button2 also looks as expected:

    Updating focus with context <UIFocusUpdateContext: 0x600003318f00: previouslyFocusedItem=<DXETV.CustomButton 0x1043134d0>, nextFocusedItem=<DXETV.CustomButton 0x104312900>, focusHeading=Up>:
    Moving focus from <DXETV.CustomButton: 0x1043134d0> to <DXETV.CustomButton: 0x104312900> in focus system <UIFocusSystem: 0x60000370c900>.

But here everything stops. When I press the up arrow again to go back to Button1, nothing happens: nothing is printed to the console, and the focused button does not change. Any hints as to what might be wrong, or how to debug this further, would be most appreciated!
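For what it's worth, the next diagnostic I plan to try is UIFocusDebugger, which can ask the focus engine directly about a specific item. A sketch of what I have in mind (dumpFocusDiagnostics and button1 are my own placeholder names):

    import UIKit

    // Sketch: ask the focus engine why an item is or isn't reachable.
    func dumpFocusDiagnostics(for button: UIButton) {
        // Explains whether (and why) this item is focusable at all.
        print(UIFocusDebugger.checkFocusability(for: button))
        // Shows how the engine would resolve a focus update request
        // originating from this environment.
        print(UIFocusDebugger.simulateFocusUpdateRequest(from: button))
        // Dumps the current state of the focus system.
        print(UIFocusDebugger.status())
    }

    // Usage: dumpFocusDiagnostics(for: button1)

The same calls can be run from LLDB while paused, e.g. `po UIFocusDebugger.checkFocusability(for: someButton)`.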
0 replies · 0 boosts · 313 views · Sep ’24
Custom layouts that specify both height and width
I have custom Layouts that provide a relative HStack or VStack (based on code from Paul Hudson's Pro SwiftUI book). However, I need my subviews to have frames that are relative to the parent view's size in both dimensions: if a subview's height is 50% of the parent's height, its width should also be 50% of the parent's width. I can't get this to work; proposing fixed sizes for both width and height in the ProposedViewSize produces results that are unexpected and (by me, anyway) unexplainable. I've searched for other custom Layout examples, and it seems like all of them specify either width or height and leave the other dimension flexible, which generally means the subviews hug their content. Am I trying to do something that SwiftUI doesn't support? If not, can someone please explain, or point me to an example? Thanks in advance!
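To make the question concrete, here's a stripped-down sketch of what I'm attempting (the type name, the vertical stacking, and the 50% fraction are illustrative, not the actual code from the book): propose a size to each subview that is a fixed fraction of the parent's size in both dimensions.

    import SwiftUI

    // Sketch: size every subview to `fraction` of the parent's proposed size
    // in BOTH dimensions, stacking the subviews vertically.
    struct RelativeBothAxesLayout: Layout {
        var fraction: CGFloat = 0.5

        func sizeThatFits(proposal: ProposedViewSize,
                          subviews: Subviews,
                          cache: inout ()) -> CGSize {
            // Claim whatever the parent offers (zero if unspecified).
            proposal.replacingUnspecifiedDimensions(by: .zero)
        }

        func placeSubviews(in bounds: CGRect,
                           proposal: ProposedViewSize,
                           subviews: Subviews,
                           cache: inout ()) {
            let childSize = CGSize(width: bounds.width * fraction,
                                   height: bounds.height * fraction)
            var y = bounds.minY
            for subview in subviews {
                subview.place(at: CGPoint(x: bounds.minX, y: y),
                              anchor: .topLeading,
                              proposal: ProposedViewSize(childSize))
                y += childSize.height
            }
        }
    }

One wrinkle I'm aware of: a subview is free to ignore the proposal (fixed-size views like a plain Text will), so the size proposed and the size actually taken can differ, which may be part of what I'm seeing.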
0 replies · 0 boosts · 221 views · Dec ’23
How to generate a plane that fills the whole ARView?
I am writing a small RealityKit app which watches for a specific image; when the image is seen and then disappears, it starts playing a video (a plane with a VideoMaterial). I'm using the width and height of the recognized image as the size of the plane's mesh, but I would really like the plane to fill the whole available space. Unfortunately, the ARView's frame is given in points, while the size of the mesh needs to be in meters, and I can't find any way to make this conversion. Is what I want to do possible?
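For what it's worth, the closest I've come is estimating the required size from the camera's field of view rather than from the view's frame. A sketch (the function name, and the assumption that the plane squarely faces the camera at a known distance, are mine):

    import ARKit
    import RealityKit

    // Sketch: estimate the plane size, in meters, needed to fill the camera's
    // field of view at `distance` meters, assuming the plane faces the camera.
    func planeSizeFillingView(arView: ARView, distance: Float) -> SIMD2<Float>? {
        guard let camera = arView.session.currentFrame?.camera else { return nil }
        let intrinsics = camera.intrinsics        // focal lengths, in pixels
        let resolution = camera.imageResolution   // capture size, in pixels
        let fovX = 2 * atan(Float(resolution.width)  / (2 * intrinsics[0][0]))
        let fovY = 2 * atan(Float(resolution.height) / (2 * intrinsics[1][1]))
        // The visible extent at a given distance is 2 * d * tan(fov / 2).
        return SIMD2<Float>(2 * distance * tan(fovX / 2),
                            2 * distance * tan(fovY / 2))
    }

Even then, the capture image's aspect ratio usually differs from the ARView's, so this is only an approximation of what I'm after.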
0 replies · 0 boosts · 455 views · Mar ’23
Keep plane in a fixed location as phone moves
I am writing a small RealityKit app which watches for a specific image; when the image is seen and then disappears, it starts playing a video (a plane with a VideoMaterial). The plane's initial size and location are the same as the image's. This plane moves around as I move the phone, and I want it to stay in place. The most frequent answer I see for this is to set the plane anchor's transform in didUpdate, but didUpdate is not called again after the image disappears. So how, and where, can I do this?
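The workaround I'm currently experimenting with (I have no idea whether it's the intended approach) is to re-parent the plane onto a world-space anchor frozen at the image's last known transform. A sketch, where videoPlane and lastImageTransform stand in for my app's real state:

    import ARKit
    import RealityKit

    // Sketch: pin the video plane at the image's last known pose by moving it
    // to a world-space anchor with that same transform.
    func pinPlaneInWorld(arView: ARView,
                         videoPlane: ModelEntity,
                         lastImageTransform: simd_float4x4) {
        let worldAnchor = AnchorEntity(world: lastImageTransform)
        arView.scene.addAnchor(worldAnchor)
        // Re-parent, keeping the plane exactly where it currently appears.
        worldAnchor.addChild(videoPlane, preservingWorldTransform: true)
    }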
0 replies · 0 boosts · 450 views · Mar ’23
Changing the color of a model loaded from a USDZ
I have an Entity, loaded from a USDZ file, and I want to change the color of one of its modelEntities. I'm isolating that part of the model like this:

    let towelModel = towel.children[0].children[0] as? ModelEntity

And that produces the correct ModelEntity:

    'Towel_Plane_008' : ModelEntity
      Transform
      SynchronizationComponent
      ModelComponent

I've tried changing its color three different ways:

    var material = SimpleMaterial()
    material.color = .init(tint: .white)
    towelModel.model?.materials[0] = material

and

    var material = SimpleMaterial()
    material.color = .init(tint: .white)
    towelModel.model?.materials = [material]

and

    var material = SimpleMaterial()
    material.color = .init(tint: .white)
    var comp: ModelComponent = towelModel.components[ModelComponent.self]!
    comp.materials = [material]
    towelModel.components.set(comp)

None of these works. What is the correct way to do this?
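In case the fixed child-index lookup is part of the problem, here's a more generic sketch of what I'm trying to do: walk the hierarchy and write a modified ModelComponent back to every ModelEntity found (the function name and the non-metallic material are just for illustration):

    import RealityKit
    import UIKit

    // Sketch: recolor every ModelEntity under `entity` by replacing its
    // materials and writing the value-type ModelComponent back.
    func recolor(_ entity: Entity, with color: UIColor) {
        if let modelEntity = entity as? ModelEntity, var model = modelEntity.model {
            let material = SimpleMaterial(color: color, isMetallic: false)
            model.materials = model.materials.map { _ in material }
            modelEntity.model = model   // write the copy back
        }
        for child in entity.children {
            recolor(child, with: color)
        }
    }

    // Usage: recolor(towel, with: .white)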
0 replies · 0 boosts · 469 views · Mar ’23
What is the correct way to position SceneKit nodes?
I have a small SceneKit app which recognizes an image and then places colored spheres on it: one in the center of the image and one on each of the four corners. If I add the sphere nodes to the root node, the code that positions the spheres in the right quadrants looks like this:

    let lowerLeft  = SCNVector3(x - width/2, y - height/2, z)
    let lowerRight = SCNVector3(x + width/2, y - height/2, z)
    let upperRight = SCNVector3(x + width/2, y + height/2, z)
    let upperLeft  = SCNVector3(x - width/2, y + height/2, z)
    let center     = SCNVector3(x, y, z)

where x, y, and z are set from the image anchor's transform matrix. The image is vertical, so the z axis points toward the camera, +y points up, and +x is to the right. This is all in world coordinates, since the spheres are "siblings" of the image. This situation is less than ideal, because when you move the camera, the spheres move with it instead of staying with the image.

If I instead add the sphere nodes as children of the image node, the code to place the spheres looks like this:

    let lowerLeft  = SCNVector3(x - width/2, y, z + height/2)
    let lowerRight = SCNVector3(x + width/2, y, z + height/2)
    let upperRight = SCNVector3(x - width/2, y, z - height/2)
    let upperLeft  = SCNVector3(x + width/2, y, z - height/2)
    let center     = SCNVector3(x, y, z)

Now x, y, and z are all zero, because the origin is now the origin of the anchor's node, which is located at the center of the image. This keeps the spheres with the image when the camera moves. But the orientation of the axes (is that the plural of axis?) has changed: now the y axis points toward the camera, -z points up, and -x is to the right. So the coordinate system has been rotated around x (to bring z to vertical) and then around z (to reverse the sign of x).

What this means is that, to do the calculations properly in a generic case, I'd need to figure out which way the axes are pointing and account for that. This seems wrong, like there should be some way of doing this where SceneKit has already taken that into account, but I have done a lot of Googling and have not been able to find one. I do see a lot of examples that say things like "to position a node 1 meter above another node, set its position to [0, 1, 0]", but that still requires knowing that the +y axis is "up", even though no one mentions it when they say this.

This app also exhibits the same problem I wrote about in https://developer.apple.com/forums/thread/724778, which was a RealityKit app, but that is a separate issue.
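The closest thing I've found to letting SceneKit do the bookkeeping is converting world-space direction vectors into the anchor node's local space with convertVector(_:from:). A sketch (which still assumes the image is mounted upright, so world "up" matches the image's up):

    import SceneKit

    // Sketch: build corner positions in the image node's local space from
    // world-space "right" and "up" directions, instead of hand-deriving how
    // the node's axes happen to be rotated.
    func cornerPositions(in imageNode: SCNNode,
                         width: Float, height: Float) -> [SCNVector3] {
        // World axes expressed in the image node's local coordinate space.
        let right = imageNode.convertVector(SCNVector3(1, 0, 0), from: nil)
        let up    = imageNode.convertVector(SCNVector3(0, 1, 0), from: nil)

        func offset(_ sx: Float, _ sy: Float) -> SCNVector3 {
            let w = sx * width / 2, h = sy * height / 2
            return SCNVector3(w * right.x + h * up.x,
                              w * right.y + h * up.y,
                              w * right.z + h * up.z)
        }
        // lower-left, lower-right, upper-right, upper-left, center
        return [offset(-1, -1), offset(1, -1), offset(1, 1), offset(-1, 1),
                SCNVector3Zero]
    }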
0 replies · 0 boosts · 641 views · Feb ’23
AR app behaving differently for different people
I have written a small app which recognizes an image and then places spheres on the four corners and in the center. This is the code which places them:

    let width = Float(imageAnchor.referenceImage.physicalSize.width)
    let height = Float(imageAnchor.referenceImage.physicalSize.height)
    let x = imageAnchor.transform.columns.3.x
    let y = imageAnchor.transform.columns.3.y
    let z = imageAnchor.transform.columns.3.z

    let lowerLeft = SIMD3<Float>(x - width/2, y - height/2, z)
    let lowerRight = SIMD3<Float>(x + width/2, y - height/2, z)
    let upperRight = SIMD3<Float>(x + width/2, y + height/2, z)
    let upperLeft = SIMD3<Float>(x - width/2, y + height/2, z)
    let center = SIMD3<Float>(x, y, z)

    self.model_01.position = lowerLeft  // pink
    self.model_02.position = lowerRight // blue
    self.model_03.position = upperRight // red
    self.model_04.position = upperLeft  // green
    self.model_05.position = center     // yellow

I have run this app on a 14 Pro Max and a X, both running iOS 16.3. On both devices, the spheres that should be on the corners are placed quite a ways in from the actual corners, though still in a rectangular pattern with the same aspect ratio. When my co-worker runs the app built on his computer from the same source, using the same reference image, the spheres are placed at the corners as they should be. He has an 11 Pro running 15.7 and a X running 16.3, and gets the same result on both. Our values for width, height, x, y, and z are all the same, but somehow the outcome is still different. We've eliminated all the variables we can think of, like displaying the reference image on our laptop screens, which are the same model. What could possibly be causing this?
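One diagnostic we haven't tried yet: asking ARKit to estimate the detected image's real-world scale, in case the physical size declared in the asset catalog doesn't match the size of the image as displayed. A sketch, assuming a world-tracking configuration (referenceImages is a placeholder for our reference set):

    import ARKit

    // Sketch: let ARKit estimate how the detected image's real size compares
    // to the physicalSize declared in the asset catalog.
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    configuration.automaticImageScaleEstimationEnabled = true

    // Later, in the delegate callback that receives the ARImageAnchor:
    // print("estimated scale factor:", imageAnchor.estimatedScaleFactor)
    // A value far from 1.0 would mean the declared physical size is off.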
2 replies · 0 boosts · 685 views · Feb ’23
Node placed in didAdd moves with camera on smaller target
I'll apologize in advance in case this is either a dumb question or a poorly worded one. I'm learning to use ARKit via online resources, which I'm finding to be a bit thin. I have an app which scans still images and places a cube on one that it recognizes. My first version of this app added the node in didAdd, which is how most sample code online does it:

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
      guard let imageAnchor = anchor as? ARImageAnchor else { return }
      updateQueue.async {
        let cube = self.createCubeNode() // returns an SCNNode with SCNBox geometry
        let position = SCNVector3(x: imageAnchor.transform.columns.3.x,
                                  y: imageAnchor.transform.columns.3.y,
                                  z: imageAnchor.transform.columns.3.z)
        cube.worldPosition = position
        self.sceneView.scene.rootNode.addChildNode(cube)
      }
    }

This worked as expected when the image was displayed on my monitor or on a TV, but when the image was on another iPhone, the cube wandered around a bit as I moved the scanning phone from side to side. I found many creative solutions to this on StackOverflow, none of which worked for me. I finally figured out, mostly by accident, that if I instead create the node in nodeFor and let ARKit add it, this does not happen:

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
      guard anchor is ARImageAnchor else { return nil }
      return self.createCubeNode()
    }

This works, since it puts the cube in the center of the image, which is where I wanted it anyway. But I don't understand why it doesn't work properly from didAdd, nor why it only happens when the displayed image is small. Thanks in advance!
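My current working theory (unconfirmed) is that the node ARKit passes to didAdd continues to be refined as tracking improves, while my one-time worldPosition snapshot into the root node does not. If that's right, a middle ground would be to keep didAdd but parent the cube to ARKit's anchor node, something like:

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let cube = self.createCubeNode()
        // Parenting to the anchor's node means the cube inherits every later
        // refinement ARKit makes to the anchor's transform, which appears to
        // be what the nodeFor version gets for free.
        node.addChildNode(cube)
    }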
2 replies · 0 boosts · 787 views · Feb ’23
Image recognition not always transitive?
I have two images, A and B. If I put A in the trackingImages array on my ARImageTrackingConfiguration and then scan B with my app, B is recognized. But if I reverse the images, so that B is in the trackingImages array and I scan A, A is not recognized. This doesn't happen all the time, just with an occasional pair of images. Is this expected? If so, why? And if not, how do I stop it from happening? Thanks in advance!
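One thing I plan to check, in case it's relevant: ARKit can validate a reference image and report when it's a poor detection or tracking candidate, which might explain an asymmetry between two similar images. A sketch, with the CGImage and physical width as placeholders:

    import ARKit

    // Sketch: ask ARKit whether an image is a good reference candidate.
    func validateReference(_ cgImage: CGImage, physicalWidth: CGFloat) {
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: physicalWidth) // meters
        reference.validate { error in
            if let error = error {
                print("Poor reference image:", error.localizedDescription)
            } else {
                print("Reference image validated OK")
            }
        }
    }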
3 replies · 0 boosts · 622 views · Feb ’23