Interesting. So people use a new stream even for sending tiny control messages? I guess that makes sense if creating the stream is basically free. Though I assume you lose all guarantees about the order in which messages will be received?
In the end I think performance of the current approach should be fine (of course I'll profile it). Even if there is an additional memory copy (not sure there is), I doubt it would make a meaningful difference.
My workaround was to rip out the (very basic) custom protocol framer and implement the same packaging logic on top of NWConnection's byte-stream send and receive methods. That's actually much simpler code, but maybe less performant (because of an additional data copy)?
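The packaging logic on top of the byte stream presumably boils down to something like this length-prefix framing. This is just a sketch assuming a hypothetical 4-byte big-endian length header; the actual protocol's header may of course differ:

```swift
import Foundation

// Hypothetical framing: 4-byte big-endian length prefix, then the payload.
func frame(_ payload: Data) -> Data {
    let length = UInt32(payload.count).bigEndian
    var framed = withUnsafeBytes(of: length) { Data($0) }
    framed.append(payload)
    return framed
}

// Tries to pull one complete message off the front of `buffer`; returns
// nil when not enough bytes have been received yet.
func unframe(_ buffer: inout Data) -> Data? {
    guard buffer.count >= 4 else { return nil }
    let length = buffer.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    let total = 4 + Int(length)
    guard buffer.count >= total else { return nil }
    let payload = buffer.subdata(in: 4..<total)
    buffer.removeSubrange(0..<total)
    return payload
}
```

Each receive callback on the NWConnection would then append into the buffer and drain complete messages with `unframe` until it returns nil.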
Thanks, yes, I much prefer replies over (the very hidden) comments. I just thought using a reply to reply to a (kind of invisible) comment would be weird/wrong.
I did just file this as FB15694053
Thank you for your update on the other FB, much appreciated!
I filed enhancement request FB15647226 for the initial issue of NWConnectionGroup not supporting connections to Bonjour service endpoints.
Thank you, I tried it but it didn't make any difference.
However I just figured it out: The issue was that I was adding my custom protocol framer options into the NWParameters in addition to the QUIC options when establishing the tunnel / connection group. It seems that tripped up nw_endpoint_flow_setup_cloned_protocols.
Removing the custom framer allows me to now successfully create an NWConnection from the group.
Next I'll have to figure out how to get my custom framer added back into the individual NWConnections.
It seems that even though the .parameters property is a constant, I can still modify it because it is a reference type. So hopefully that will work. 🤞
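Re-attaching the framer via the extracted connection's parameters might look roughly like this. A sketch only: `MyFramer` stands in for the custom NWProtocolFramerImplementation, and whether mutating the parameters after extraction actually takes effect is exactly what's being tested here:

```swift
import Network

// `MyFramer` is a placeholder for the custom framer implementation.
let framerOptions = NWProtocolFramer.Options(
    definition: NWProtocolFramer.Definition(implementation: MyFramer.self))

if let stream = group.extract() {
    // .parameters is declared as a constant, but NWParameters is a class,
    // so the protocol stack it points at can still be mutated before start.
    stream.parameters.defaultProtocolStack.applicationProtocols.insert(framerOptions, at: 0)
    stream.start(queue: .main)
}
```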
I just filed the enhancement request as FB15642049
I did (try to) implement the workaround of using Bonjour just for device discovery, resolving the hostname and then trying to establish a QUIC tunnel (multiplexed NWConnectionGroup) using that hostname and a hardcoded port. Unfortunately I did not get very far before hitting another wall.
Creating the NWConnectionGroup using my QUIC parameters and a .hostPort() endpoint worked without getting an error. On the other device I am creating an NWListener with QUIC parameters and the hardcoded port. Its newConnectionGroupHandler is called, and after accepting it (by calling start(queue:..)) the NWConnectionGroups on both devices enter the .ready state. Great!! 🥳
Next I am getting a QUIC stream (NWConnection) from the group by calling .extract(), and call .start(queue:..) on it. Unfortunately this is where I hit a wall again.
The logs show the following errors:
nw_endpoint_flow_setup_cloned_protocols [C4 fe80::1c46:3f2a:b0a6:c276%en0.7934@en0 in_progress channel-flow (satisfied (Path is satisfied), interface: en0[802.11], scoped, ipv4, ipv6, dns, uses wifi)] could not find protocol to join in existing protocol stack
nw_endpoint_flow_failed_with_error [C4 fe80::1c46:3f2a:b0a6:c276%en0.7934@en0 in_progress channel-flow (satisfied (Path is satisfied), interface: en0[802.11], scoped, ipv4, ipv6, dns, uses wifi)] failed to clone from flow, moving directly to failed state
and the connection ends up .cancelled.
The exact same error messages were being discussed in this older thread on QUIC multiplexed groups.
However, in that thread the issue turned out to be that different QUIC options were used for establishing the tunnel and the individual streams. In my case I am letting the stream inherit the QUIC options from the group/tunnel (though I also tried passing them in explicitly), and I am using the same QUIC options that had been working for establishing a single NWConnection.
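For reference, the setup on the initiating side looks roughly like this. A sketch, not the exact code: the endpoint host, the port (7934, as it appears in the logs above), and the elided QUIC/TLS configuration are placeholders:

```swift
import Network

let quicOptions = NWProtocolQUIC.Options()
// ... TLS configuration on quicOptions.securityProtocolOptions elided ...
let parameters = NWParameters(quic: quicOptions)

// Multiplexed QUIC tunnel to the resolved hostname and hardcoded port.
let descriptor = NWMultiplexGroup(to: .hostPort(host: "device.local", port: 7934))
let group = NWConnectionGroup(with: descriptor, using: parameters)
group.stateUpdateHandler = { state in print("group state:", state) }
group.start(queue: .main)

// Once the group is .ready, pull an individual QUIC stream off the tunnel.
if let stream = group.extract() {
    stream.stateUpdateHandler = { state in print("stream state:", state) }
    stream.start(queue: .main)
}
```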
Thank you for the quick reply.
Unfortunately I don't quite understand how I would do this.
I assume I would still use an NWListener with an NWListener.Service to advertise my service over Bonjour?
But even if I don't need to resolve the hostname and instead hardcode it (in my case as "_aircam._udp.local."), I'd still need to specify a port to use .hostPort(), right?
And NWListener doesn't allow me to choose my own port as far as I can tell.
I assume nw_group_descriptor_allows_endpoint is a closed source function?
Thank you. That's unfortunate. I'll file an enhancement request as you suggested.
Thank you for the answer, Quinn, I appreciate it!
Would NWConnection.ContentContext.relativePriority work to prioritize messages relative to each other? That is, would they get re-ordered based on their assigned priorities? Presumably only as long as multiple messages are queued in the on-device send buffer?
And if this does work, I assume it would only work within a single NWConnection? Or would messages from multiple NWConnections sharing a single QUIC tunnel also get re-ordered based on their relative priorities?
My context is that I am working on a (relatively) simple custom video streaming protocol built on top of QUIC, trying to use some ideas from the "Media over QUIC" project. One of those is using a QUIC stream per frame so that I can cancel transmission of old queued frames if there is congestion, while also prioritizing some frames (I-frames) over others.
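If relativePriority does apply here, usage would presumably look something like this sketch. The connection, frame data, and priority values are placeholders, and whether queued messages actually get re-ordered is exactly the open question above:

```swift
import Network

// Placeholder contexts: I-frames marked more important than delta frames.
let keyframeContext = NWConnection.ContentContext(identifier: "iframe", priority: 1.0)
let deltaContext = NWConnection.ContentContext(identifier: "pframe", priority: 0.2)

connection.send(content: iFrameData,
                contentContext: keyframeContext,
                isComplete: true,
                completion: .contentProcessed { error in
                    if let error { print("send failed:", error) }
                })
```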
Just filed this as FB12538400. Please file a Feedback on this as well, so that this can get fixed in the next beta.
Looks like the @Observable macro is broken in beta 3 if you target visionOS.
If you download the Observation sample project, it builds fine as is. However as soon as you add visionOS as a target and build for the visionOS simulator, the same error appears.
I don't know, but I would look into using the SceneReconstructionProvider in combination with the OcclusionMaterial.
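A rough sketch of that combination on visionOS (session lifecycle, authorization, and the mesh-to-entity conversion are elided; this is an outline, not a tested implementation):

```swift
import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
try await session.run([sceneReconstruction])

// Stream mesh anchors for real-world geometry and cover them with
// OcclusionMaterial so they hide virtual content behind them.
for await update in sceneReconstruction.anchorUpdates {
    guard update.event == .added else { continue }
    let meshAnchor = update.anchor
    // Converting meshAnchor.geometry into a MeshResource / collision shape
    // is elided here; apply OcclusionMaterial() to the resulting ModelEntity
    // and place it at the anchor's transform.
}
```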
What I discovered is that you can indeed use RealityKit's head anchor to virtually attach things to the user's head (by attaching entities as children to the head anchor).
However the head anchor's transform is not exposed, it always remains at identity. Child entities will correctly move with the head, but if you query their global position (or orientation) using position(relativeTo:nil), you just get back their local transform.
This means it seems currently impossible to write any RealityKit systems that react to the user's position (for example, a character looking at the user, like the dragon in Apple's own demo) without getting the pose from ARKit and injecting it into the RealityView via the update closure.
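The ARKit route looks roughly like this on visionOS (a sketch; error handling and the RealityView wiring are elided):

```swift
import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

// Query the device (head) pose for the current time, then inject it into
// the RealityView via its update closure or a custom System.
if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    let headTransform = deviceAnchor.originFromAnchorTransform  // float4x4 in world space
}
```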
I don't know if this is a bug or a conscious design decision. My guess is that it was an early design decision that should have been revised later but wasn't: initially the thinking may have been that the user's head transform should be hidden for privacy reasons, but later engineers realized that some applications (like fully custom Metal renderers) absolutely need the user's head pose, so it was exposed after all via ARKit.
It is probably worth filing a feedback on this, because I can't see how it makes sense to hide the head anchor's transform when the same information is already accessible in a less performant and less convenient way.
In the WWDC Slack, an Apple employee answered this question: Apps do not get access to eye tracking data. No exceptions, including for medical research purposes.