Posts

Post not yet marked as solved · 0 Replies · 410 Views
I'm trying to get my app to function like the iOS Contacts app, where either the Tab key or the Enter key can be used to move from field to field. I have things working for the Enter key by changing the FocusState when onSubmit is called, but this doesn't work for the Tab key. This isn't related to system settings, since iOS Contacts works as expected. How am I supposed to listen for the Tab key on iOS to change the focus?
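For reference, here's a minimal sketch of the approach I'm describing, with the Enter/Return path working via onSubmit and a guess at handling Tab via onKeyPress, which as far as I can tell requires iOS 17 or later. The Field enum and field names are just illustrative:

```swift
import SwiftUI

struct ContactForm: View {
    // Illustrative field list, not the Contacts app's actual fields.
    enum Field: Int {
        case firstName, lastName, phone
    }

    @State private var firstName = ""
    @State private var lastName = ""
    @State private var phone = ""
    @FocusState private var focusedField: Field?

    var body: some View {
        Form {
            TextField("First name", text: $firstName)
                .focused($focusedField, equals: .firstName)
            TextField("Last name", text: $lastName)
                .focused($focusedField, equals: .lastName)
            TextField("Phone", text: $phone)
                .focused($focusedField, equals: .phone)
        }
        // Return key: advance focus when any field submits.
        .onSubmit { advanceFocus() }
        // Tab key on a hardware keyboard (iOS 17+): intercept the press and
        // advance focus ourselves, returning .handled so the system doesn't
        // also act on it.
        .onKeyPress(.tab) {
            advanceFocus()
            return .handled
        }
    }

    private func advanceFocus() {
        guard let current = focusedField,
              let next = Field(rawValue: current.rawValue + 1) else {
            focusedField = nil
            return
        }
        focusedField = next
    }
}
```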
Posted by slmille4.
Post not yet marked as solved · 0 Replies · 804 Views
For https://developer.apple.com/videos/play/wwdc2021/10023/?time=411 I have a similar case in my app, but SwiftUI doesn't seem to handle the case where the user taps the submit button after typing instead of hitting Return or the "Done" button as they do in the video. In my case I have a focus listener on the TextField and want to perform a custom action when the TextField loses focus. If the user taps the submit button in the UI instead of using the keyboard, the TextField doesn't lose focus, or at least the handler isn't called. Is there some way to make sure the focus-loss handler is called on the text field when they tap submit, without a bunch of kludgy code creating dependencies between the components?
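Here's a stripped-down sketch of the kind of workaround I'm trying to avoid: watching the focus state itself with onChange, and having the on-screen button resign focus before doing its work so the same focus-loss path runs either way. The view and names are illustrative, not my actual code:

```swift
import SwiftUI

struct NoteEntryView: View {
    @State private var note = ""
    @FocusState private var noteIsFocused: Bool

    var body: some View {
        VStack {
            TextField("Note", text: $note)
                .focused($noteIsFocused)
                // One place to react to editing ending, however it ends:
                // Return/Done on the keyboard or the button below.
                .onChange(of: noteIsFocused) { isFocused in
                    if !isFocused { commitNote() }
                }

            Button("Submit") {
                // Resign focus first so the focus-loss handler above runs,
                // which is the dependency between components I'd rather avoid.
                noteIsFocused = false
            }
        }
    }

    private func commitNote() {
        // Custom action to run when editing ends.
        print("Committing: \(note)")
    }
}
```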
Posted by slmille4.
Post marked as solved · 1 Reply · 842 Views
We're having problems with images being sent to our Core ML model at unsupported image orientations. I'm setting the orientation parameter on VNImageRequestHandler, and I'm also using the OCR model, so I know the parameter is being passed correctly. At https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml it says, "Most models are trained on images that are already oriented correctly for display. To ensure proper handling of input images with arbitrary orientations, pass the image's orientation to the image request handler." But I'm not sure whether the rotation is supposed to happen automatically or whether they're implying that the model can read the orientation and rotate the image accordingly.
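For context, here's roughly how I'm passing the orientation. MyClassifier is a placeholder for our generated model class, and my assumption (which is exactly what I'm asking about) is that Vision uses this orientation to upright the pixels before they reach the model:

```swift
import Vision
import CoreML
import UIKit

func classify(_ image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    // MyClassifier is a placeholder for the app's generated Core ML class.
    let model = try VNCoreMLModel(for: MyClassifier(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
    }
    // Crop/scale options control resizing, not rotation.
    request.imageCropAndScaleOption = .centerCrop

    // Map UIKit's orientation to the CGImagePropertyOrientation Vision expects.
    let orientation = CGImagePropertyOrientation(image.imageOrientation)

    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])
}

// A mapping like the one in Apple's sample code, reproduced for completeness.
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}
```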
Posted by slmille4.
Post not yet marked as solved · 0 Replies · 444 Views
Can someone from Apple please clarify the company's position on developing third-party apps for Bluetooth hardware? This is in reference to the article "Apple rejects 3rd-party Tesla app update as it strictly enforces written consent for third-party API use". Does Apple require permission from the manufacturer before using Bluetooth GATT services?
Posted by slmille4.
Post not yet marked as solved · 0 Replies · 473 Views
Will a model deployment accessed via MLModelCollection.beginAccessing sync for apps deployed within an enterprise via Intune, or only for apps distributed through the Apple App Store?
Posted by slmille4.
Post not yet marked as solved · 1 Reply · 948 Views
I'm looking through the ARKitInteraction sample app, and there are a few things I don't really understand about how the trackedRaycast transform and the ARAnchor transform are related.
1. In the trackedRaycast handler, the result of the raycast is assigned to the SCNNode's transform. However, that object is already attached to an ARAnchor, and the documentation says: "Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position." Doesn't this mean the anchor and the trackedRaycast are giving conflicting instructions?
2. In the ARSCNViewDelegate's func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) implementation, there's the call objectAtAnchor.simdPosition = anchor.transform.translation. Doesn't the fact that didUpdate is being called mean that the anchor transform was already set to the object transform? I'm guessing this is to help the trackedRaycast functionality work with the anchor functionality, but I don't see how.
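For reference, here's a sketch of what I understand the quoted documentation to be asking for when an object moves: apply the new pose to the node and replace its anchor rather than leaving the old one in place. This is my reading, not the sample's exact code, and the bookkeeping names are illustrative:

```swift
import ARKit
import SceneKit

final class VirtualObjectPlacer {
    // Illustrative bookkeeping: which anchor currently backs which node.
    private var anchorsByNode: [SCNNode: ARAnchor] = [:]

    func move(_ object: SCNNode, to worldTransform: simd_float4x4, in session: ARSession) {
        // Apply the raycast result so the node tracks visually right away.
        object.simdWorldTransform = worldTransform

        // Retire the anchor at the old position...
        if let oldAnchor = anchorsByNode[object] {
            session.remove(anchor: oldAnchor)
        }
        // ...and add a fresh one at the new position, so ARKit optimizes
        // world tracking around where the object actually is now.
        let newAnchor = ARAnchor(transform: worldTransform)
        anchorsByNode[object] = newAnchor
        session.add(anchor: newAnchor)
    }
}
```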
Posted by slmille4.
Post marked as solved · 1 Reply · 683 Views
I'm trying to convert some hit-testing code to use raycasting, but I don't see an equivalent of ARHitTestResult.ResultType.existingPlaneUsingExtent. "allowing: .estimatedPlane, alignment: .vertical" returns nil for some of the anchors where .existingPlaneUsingExtent returns values for all of them, and "allowing: .estimatedPlane, alignment: .any" doesn't seem to return values for any anchors, just the featurePoint normals. Not sure if I'm missing something, or if this is a bug, or what.
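For comparison, here's the conversion I've been attempting, assuming .existingPlaneGeometry is the closest raycast analogue to .existingPlaneUsingExtent (both stay within the detected plane's estimated shape and size rather than extending it infinitely):

```swift
import ARKit

// `sceneView` is an ARSCNView.
func raycastExistingPlanes(at point: CGPoint, in sceneView: ARSCNView) -> ARRaycastResult? {
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .any) else {
        return nil
    }
    // Results are sorted nearest-first, like the old hit test.
    return sceneView.session.raycast(query).first
}

// Old code being replaced, for comparison:
// let results = sceneView.hitTest(point, types: .existingPlaneUsingExtent)
```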
Posted by slmille4.
Post not yet marked as solved · 1 Reply · 744 Views
When an ARAnchor is created, it is passed a transform with the real-world position and orientation where it should be located. That ARAnchor then gets an SCNNode which uses the transform as its origin, and as the ARAnchor's connection to the real world is adjusted, the SCNNode's location is adjusted along with it. What happens if the SCNNode's transform is set to a new value, something like SCNMatrix4(hitTestResult.worldTransform)? Is the connection to the ARAnchor lost, leaving the SCNNode floating around in SceneKit space with no direct connection to the real world?
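To make the question concrete, here's a sketch of the setup I have in mind, following the usual ARSCNView pattern of parenting content under the node ARKit manages for the anchor. The comments spell out how I understand the coordinate spaces, which may well be where I'm going wrong:

```swift
import ARKit
import SceneKit

class AnchorContentDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // `node` is ARKit's node for the anchor; ARKit keeps updating its
        // transform every frame to track the anchor.
        let content = SCNNode(geometry: SCNSphere(radius: 0.02))
        // As a child, `content` inherits every correction ARKit applies to
        // `node`; its own transform is only a local offset from the anchor.
        node.addChildNode(content)

        // If I later do something like
        //   content.transform = SCNMatrix4(hitTestResult.worldTransform)
        // SceneKit interprets that matrix in the parent's (anchor's) space,
        // not world space, so the content ends up offset from the anchor
        // rather than detached from it. Only a node added straight to
        // sceneView.scene.rootNode would be "just floating" in SceneKit
        // space, staying put until moved explicitly.
    }
}
```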
Posted by slmille4.
Post marked as solved · 1 Reply · 765 Views
At https://developer.apple.com/documentation/arkit/arscnview/providing_3d_virtual_content_with_scenekit there is the line of code: planeNode.position = SCNVector3Make(planeAnchor.center.x, 0, planeAnchor.center.z). Why is the y coordinate 0? Why not use the y coordinate of the anchor?
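For context, here's the surrounding delegate code as I understand it, with comments on why I'd guess the local y offset can be 0: the plane node is a child of the anchor's node, and planeAnchor.center is given relative to the anchor's transform, which already carries the real-world height. That guess is what I'd like confirmed:

```swift
import ARKit
import SceneKit

class PlaneVisualizer: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        let planeNode = SCNNode(geometry: plane)

        // Position is local to the anchor's coordinate space; the detected
        // plane lies in the anchor's x-z plane, so y is 0 by construction.
        planeNode.position = SCNVector3Make(planeAnchor.center.x, 0, planeAnchor.center.z)
        // SCNPlane is vertical by default; rotate it to lie flat.
        planeNode.eulerAngles.x = -.pi / 2

        // Child of the anchor's node, which ARKit keeps at the anchor's
        // real-world transform, including its height.
        node.addChildNode(planeNode)
    }
}
```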
Posted by slmille4.