Posts

Post not yet marked as solved
18 Replies
Just filed this as FB12538400. Please file a Feedback on this as well, so that this can get fixed in the next beta.
Post not yet marked as solved
18 Replies
Looks like the @Observable macro is broken in beta 3 if you target visionOS. If you download the Observation sample project, it builds fine as-is. However, as soon as you add visionOS as a target and build for the visionOS simulator, the same error appears.
Post not yet marked as solved
2 Replies
I don't know, but I would look into using the SceneReconstructionProvider in combination with the OcclusionMaterial.
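Rough sketch of that idea, in case it helps (my assumptions, untested; makeMeshResource(from:) is a hypothetical helper whose MeshAnchor.Geometry-to-MeshResource conversion is not shown): stream MeshAnchors from a SceneReconstructionProvider and mirror them as ModelEntities rendering with OcclusionMaterial, so real-world geometry hides virtual content behind it.

import ARKit
import RealityKit

@MainActor
final class SceneOcclusion {
    private let session = ARKitSession()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var entities: [UUID: ModelEntity] = [:]

    func run(root: Entity) async {
        guard SceneReconstructionProvider.isSupported else { return }
        do {
            try await session.run([sceneReconstruction])
        } catch {
            print("Failed to start scene reconstruction: \(error)")
            return
        }

        // Mirror every mesh anchor as an occlusion-only entity.
        for await update in sceneReconstruction.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added:
                guard let mesh = try? await makeMeshResource(from: anchor.geometry) else { continue }
                let entity = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
                entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                entities[anchor.id] = entity
                root.addChild(entity)
            case .updated:
                entities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
            case .removed:
                entities[anchor.id]?.removeFromParent()
                entities[anchor.id] = nil
            }
        }
    }

    // Hypothetical helper: convert MeshAnchor.Geometry into a renderable MeshResource
    // (e.g. via MeshDescriptor). The actual conversion is omitted in this sketch.
    private func makeMeshResource(from geometry: MeshAnchor.Geometry) async throws -> MeshResource {
        fatalError("MeshAnchor.Geometry conversion not shown in this sketch")
    }
}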
Post not yet marked as solved
6 Replies
What I discovered is that you can indeed use RealityKit's head anchor to virtually attach things to the user's head (by adding entities as children of the head anchor). However, the head anchor's transform is not exposed; it always remains at identity. Child entities correctly move with the head, but if you query their global position (or orientation) using position(relativeTo: nil), you just get back their local transform.

This means it currently seems impossible to write any RealityKit system that reacts to the user's position (for example, a character looking at the user, like the dragon in Apple's own demo) without getting the head pose from ARKit and injecting it into the RealityView via the update closure.

I don't know whether this is a bug or a conscious design decision. My guess is that it was an early design decision that should have been revised later but wasn't: initially the thinking was that the user's head transform should be hidden for privacy reasons, and only later did engineers realize that some applications (like fully custom Metal renderers) absolutely need the user's head pose, so it was exposed after all via ARKit. It is probably worth filing a feedback on this, because I can't see how it makes sense to hide the head anchor's transform when the same information is already accessible in a less performant and less convenient way.
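For reference, this is roughly what the ARKit workaround looks like (a sketch; the HeadTracker name is mine): run a WorldTrackingProvider and query the DeviceAnchor whenever the head pose is needed, e.g. from a RealityView update closure or a custom System.

import ARKit
import QuartzCore
import RealityKit

@MainActor
final class HeadTracker {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async {
        do {
            try await session.run([worldTracking])
        } catch {
            print("Failed to start world tracking: \(error)")
        }
    }

    // Current head transform in world space, or nil while tracking is not running.
    func currentHeadTransform() -> simd_float4x4? {
        guard case .running = worldTracking.state,
              let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return nil }
        return device.originFromAnchorTransform
    }
}

A System can then call currentHeadTransform() from its update and write the pose into a component, which is exactly the kind of plumbing that should not be necessary.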
Post not yet marked as solved
3 Replies
In the WWDC Slack, an Apple employee answered this question: Apps do not get access to eye tracking data. No exceptions, including for medical research purposes.
Post marked as solved
3 Replies
I am still not quite sure about the correct way to implement this. I want to establish a single working connection between my two devices, then shut down the NWListener and use the established connection in my app. I was expecting only one of the incoming connections to enter the .ready state after accepting them, but they all do. They are also all reported as viable.

My current solution is to send an initial message from the client over the connection, try to read from all the connections returned by the NWListener (once they are .ready), and use the one connection through which I receive this message. This does work, but the warning messages the Network framework prints to the console make me feel like I am doing it wrong:

[connection] nw_flow_add_read_request [C2 fe80::53:558d:8f79:9a0c%en2.60902 ready channel-flow (satisfied (Path is satisfied), viable, interface: en2, scoped)] already delivered final read, cannot accept read requests
[connection] nw_read_request_report [C2] Receive failed with error "No message available on STREAM"
Error receiving next message: POSIXErrorCode(rawValue: 96): No message available on STREAM.

I am calling receiveMessage() on the connection immediately after it enters the .ready state. Is there a better way? Maybe wait for some amount of time, during which all but one connection should change from .ready to .cancelled? But what is the shortest time threshold that is guaranteed to be long enough?
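For context, this is approximately what I am doing on the listener side (simplified sketch; the handshake format and names are just placeholders):

import Network

// Accept every incoming connection, read one handshake message once a
// connection is .ready, and keep only the connection that delivers it.
final class HandshakeListener {
    private var listener: NWListener?
    private var candidates: [NWConnection] = []
    private(set) var chosen: NWConnection?

    func start() throws {
        let listener = try NWListener(using: .tcp)
        self.listener = listener

        listener.newConnectionHandler = { [weak self] connection in
            guard let self else { return }
            self.candidates.append(connection)
            connection.stateUpdateHandler = { [weak self] state in
                guard case .ready = state else { return }
                // Read the client's initial handshake message.
                connection.receiveMessage { [weak self] data, _, _, error in
                    guard let self, self.chosen == nil,
                          error == nil, let data, !data.isEmpty else { return }
                    self.chosen = connection          // keep this connection
                    self.listener?.cancel()           // stop accepting new ones
                    for other in self.candidates where other !== connection {
                        other.cancel()                // drop the rest
                    }
                }
            }
            connection.start(queue: .main)
        }
        listener.start(queue: .main)
    }
}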
Post not yet marked as solved
5 Replies
Allocate a TCP port dynamically, then try to allocate the same port on UDP, and loop on failure.

Thank you, that sounds like a reasonable plan to me. What does 'loop on failure' mean? Repeatedly try to listen and connect on the same port? And how likely is failure in this scenario (a local connection between two devices, with Bonjour providing the TCP connection)?
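If I understand the suggestion correctly, the loop would look roughly like this (my interpretation, untested): bind TCP on an ephemeral port, try to claim the same port number for UDP, and start over with a fresh TCP port if the UDP bind fails.

import Network

func startListenerPair(queue: DispatchQueue,
                       completion: @escaping (_ tcp: NWListener, _ udp: NWListener) -> Void) {
    func attempt() {
        guard let tcp = try? NWListener(using: .tcp) else { return }
        tcp.stateUpdateHandler = { state in
            // Wait until the system has assigned the ephemeral TCP port.
            guard case .ready = state, let port = tcp.port else { return }
            // Try to claim the same port number for UDP.
            guard let udp = try? NWListener(using: .udp, on: port) else {
                tcp.cancel()
                attempt()                  // loop: new ephemeral TCP port
                return
            }
            udp.stateUpdateHandler = { udpState in
                switch udpState {
                case .ready:
                    completion(tcp, udp)
                case .failed:
                    tcp.cancel()
                    udp.cancel()
                    attempt()              // loop again on UDP failure
                default:
                    break
                }
            }
            udp.start(queue: queue)
        }
        tcp.start(queue: queue)
    }
    attempt()
}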
Post not yet marked as solved
5 Replies
Thank you! Actually, the second option sounds slightly easier to me. At least that way I am 100% guaranteed that both connections talk to the same device. Is there a downside to just picking some fixed port number (say 649) and using that for the UDP connection? Is there a risk that some wireless routers (I am only connecting locally) would block that port, whereas Bonjour would have found a non-blocked one?
Post marked as Apple Recommended
Thank you for acknowledging the issue. I assume at this point it is a given that this bug will ship to the public in 15.1 on Monday, right?
Post marked as Apple Recommended
Yes, it is really unfortunate that it looks like this bug is going to ship in 15.1. I just confirmed that it is possible to work around this by getting the corresponding PHAsset via the assetIdentifier and then using the PHAssetResourceManager to request the PNG image data. Of course this requires asking for read access permission to the photo library, and dealing with all the complexities that come along with that.
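For anyone who needs it, here is roughly what that workaround looks like (a sketch; the function name is mine, and it assumes photo library read access has already been granted and that assetIdentifier comes from the picker item):

import Photos

func loadOriginalImageData(assetIdentifier: String,
                           completion: @escaping (Data?) -> Void) {
    // Look up the PHAsset behind the picker's asset identifier.
    guard let asset = PHAsset.fetchAssets(withLocalIdentifiers: [assetIdentifier],
                                          options: nil).firstObject else {
        completion(nil)
        return
    }

    // Pick the original photo resource; for a PNG this keeps the PNG data.
    let resources = PHAssetResource.assetResources(for: asset)
    guard let resource = resources.first(where: { $0.type == .photo }) else {
        completion(nil)
        return
    }

    var data = Data()
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true  // allow fetching from iCloud if needed

    PHAssetResourceManager.default().requestData(for: resource,
                                                 options: options,
                                                 dataReceivedHandler: { chunk in
        data.append(chunk)
    }, completionHandler: { error in
        completion(error == nil ? data : nil)
    })
}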
Post not yet marked as solved
1 Reply
I just ran into the same thing. My guess is this is a bug in the CoreML compiler or on-device scheduler, where it tries to put part of the network onto the Neural Engine, even though it contains convolutions that need more memory than the Neural Engine can handle.
Post marked as solved
1 Reply
This was recently acknowledged as an Xcode 13 bug here: https://github.com/apple/coremltools/issues/1301
Post marked as solved
6 Replies
Perfect, thank you for the quick reply and excellent workaround! Much appreciated!