Yeah, but since the whole block will run on (and block) the MainActor later, I think the OP's question is why it is done this way.
The general technique would be to create your own actor backed by a serial executor that is in turn backed by a dispatch queue you create, can access, and can then pass to the AVCapture APIs that expect a queue… then all the code will execute on that queue and can coexist with the code on the actor.
So yes, you can do it, but I am not exactly sure whether it will build cleanly and work without issues on 5.10; it should be easily possible from Swift 6.
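A minimal sketch of that technique, assuming Swift 5.9+ custom actor executors; the type names, the queue label, and the AVCaptureVideoDataOutput usage are illustrative, not from any real project:

```swift
import AVFoundation
import Dispatch

// A serial executor backed by a DispatchQueue that you create and own,
// so the very same queue can be handed to AVCapture APIs.
final class CaptureQueueExecutor: SerialExecutor {
    let queue = DispatchQueue(label: "com.example.capture")

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

actor CaptureCoordinator {
    private let executor = CaptureQueueExecutor()

    // All actor-isolated code now runs on the executor's dispatch queue.
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        executor.asUnownedSerialExecutor()
    }

    // The same queue is passed to the AVCapture API that expects a queue,
    // so delegate callbacks and actor-isolated code coexist on one queue.
    func attach(output: AVCaptureVideoDataOutput,
                delegate: any AVCaptureVideoDataOutputSampleBufferDelegate) {
        output.setSampleBufferDelegate(delegate, queue: executor.queue)
    }
}
```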
@ninh.asus Just to add a couple of points to @eskimo's excellent answer.
I'd suggest first looking into why you need a CFRunLoop or NSRunLoop in the first place. I bet that many (if not all) of the regular use cases (timers, socket read/write notifications) have better equivalents in DispatchSource.
I just went through this myself, updating a couple of regular Objective-C libraries from CFRunLoop to DispatchSource to take them off the main queue, and all the building blocks are there.
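For illustration, a minimal sketch of the DispatchSource equivalents, assuming you own the queue and have an already-created non-blocking socket descriptor (the label and fd are hypothetical):

```swift
import Darwin
import Dispatch

// Everything runs on a queue you own instead of on a run loop.
let queue = DispatchQueue(label: "com.example.worker")

// Run-loop timer replacement: a DispatchSourceTimer firing every second.
let timer = DispatchSource.makeTimerSource(queue: queue)
timer.schedule(deadline: .now() + 1, repeating: .seconds(1))
timer.setEventHandler { print("tick") }
timer.resume()

// Socket read notification replacement: `fd` is an already-created,
// non-blocking socket descriptor.
func monitorReads(on fd: Int32) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
    source.setEventHandler {
        var buffer = [UInt8](repeating: 0, count: 64 * 1024)
        let bytesRead = recv(fd, &buffer, buffer.count, 0)
        if bytesRead > 0 {
            _ = buffer.prefix(bytesRead)   // hand the datagram to the parser here
        }
    }
    source.setCancelHandler { close(fd) }
    source.resume()
    return source
}
```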
Writing to confirm that live/deferred mode does not work.
Last X seconds recording mode does work.
The Swift Concurrency template is also affected: it shows no data about Swift Tasks and Actors. Last X seconds mode is the only usable one at the moment.
Bug updated.
I think you should set rmtAddrLength to sizeof(rmtAddr) before calling recvfrom, so that recvfrom knows how much space it can use to fill in the address.
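A sketch of what I mean, assuming a sockaddr_storage-based receive written in Swift; only rmtAddr and rmtAddrLength come from your code, the rest is illustrative:

```swift
import Darwin

// Hypothetical receive helper: rmtAddrLength must describe the whole buffer
// *before* recvfrom runs; recvfrom then overwrites it with the actual size.
func receiveDatagram(on fd: Int32) -> (bytes: [UInt8], from: sockaddr_storage)? {
    var rmtAddr = sockaddr_storage()
    var rmtAddrLength = socklen_t(MemoryLayout<sockaddr_storage>.size)   // set before the call
    var buffer = [UInt8](repeating: 0, count: 64 * 1024)

    let bytesRead = withUnsafeMutablePointer(to: &rmtAddr) { addrPtr in
        addrPtr.withMemoryRebound(to: sockaddr.self, capacity: 1) { sa in
            recvfrom(fd, &buffer, buffer.count, 0, sa, &rmtAddrLength)
        }
    }
    guard bytesRead >= 0 else { return nil }
    return (Array(buffer.prefix(bytesRead)), rmtAddr)
}
```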
@ckarcher I have gotten it approved for multiple apps, typically within a week, but I no longer remember whether I had to actually re-submit some of them after waiting for a while. But yes, I got the entitlement...
@eskimo I was able to put together some real-device tests from my MBA to an iPhone 14 Pro over WiFi.
The iPhone app is compiled in release mode and always uses BSD sockets for easy comparison, so I am essentially testing the MBA/macOS stack.
On the MBA side I profile the test in Instruments with deferred mode recording. No data processing takes place.
The first test uses 1 million UDP packets sent from the MBA to the iPhone 14 Pro to see how fast I can dump them to the network.
Via BSD sockets in 17 seconds, via NWFW in 6 seconds. Repeated multiple times.
The second test uses 1 million UDP packets dumped from the iPhone 14 Pro to the MBA to see how fast they can be received.
Via BSD sockets in 51 seconds, via NWFW in 53 seconds.
In all cases most of the time is spent in sendmsg / recvmsg, or nw_connection_add_read_request_on_queue / nw_connection_add_write_request_on_queue.
These results look much better. Continuing to profile more!
It's happening on the latest everything as of today:
macOS Sonoma 14.4 (23E214)
Xcode/Instruments Version 15.3 (15E204a)
P.S. I thought it might be related to some .xcodeproj internals, but it also does not work when I try to profile one specific test case from a super simple Swift package that includes code using the os_signpost API.
I have code that I have been instrumenting for years. Recently I pulled it up in Xcode 15 and neither os_signpost nor pointsOfInterest works.
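For reference, the instrumentation is plain os_signpost usage along these lines (the subsystem and interval names here are made up):

```swift
import os

// Hypothetical example of the kind of instrumentation that stopped showing
// up: a points-of-interest interval around a unit of work.
let poiLog = OSLog(subsystem: "com.example.app", category: .pointsOfInterest)

func processChunk() {
    let signpostID = OSSignpostID(log: poiLog)
    os_signpost(.begin, log: poiLog, name: "processChunk", signpostID: signpostID)
    defer { os_signpost(.end, log: poiLog, name: "processChunk", signpostID: signpostID) }
    // ... work that should show up in the os_signpost / Points of Interest tracks ...
}
```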
Thanks @eskimo for the explanation. You nailed it completely. I reused your example code, which explains the bug much more clearly than my original description, and submitted a bug: FB13678278.
Our app (https://cloudbabymonitor.com) currently back-deploys to iOS 12, so I had hoped to switch to Network.framework with the next update, which will support iOS 13+. But this bug is a stopper: our app relies on opening multiple UDP flows from the same local port, that seems to be broken, and I am not sure we will ever see it fixed in iOS 13+ (since iOS 12 is no longer receiving updates).
As for the Watch, we do stream audio and video at the same time, so on the Watch that should satisfy the requirement for using Network.framework. However, it all comes down to the final experience. At the moment we can successfully establish a UDP audio/video stream between two iPhones on WiFi in about 1 second - and that includes Bonjour discovery, DNS resolution, and everything else. I'm not yet sure I can get to 1 second on the Watch from looking at the wrist to seeing live video, but I want to have the code ready, see it running, and try to get it that fast purely for the engineering pleasure of it :) - if not for the happiness of our customers, who have been begging for the Apple Watch live video feature for years already :)
thanks for your help,
Martin
Thanks @eskimo for your insights, as always.
We use http://enet.bespin.org for reliable peer-to-peer communication. It's a very old and very reliable UDP library, not dissimilar to QUIC - except that it existed way before QUIC. It provides multiple independent streams of data (reliable and unreliable) on top of one UDP flow. It was created for multiplayer games and is super easy to work with.
I have a nice, small Swift API written for it, in production on Linux and iOS/macOS since 2014, that I plan to open source at some point.
I want to move it over to Network.framework - to make it work on the Apple Watch, where BSD sockets do not work, and to gain performance by moving away from BSD sockets. The networking folks (from Apple) I had a chance to talk to claimed that moving to Network.framework would make everything faster...
I will try to explain the problem as simply as possible. Please bear with me :)
On each device you bind one UDP socket to a random port, and that socket/port is used for all outgoing and incoming communication with the other peers. The UDP datagrams carry a simple protocol that handles streams, retransmits, datagram re-ordering, etc.
So imagine App1 opens UDP Socket1, and binds it to ::Port1.
App2 opens Socket2, and binds it to ::Port2.
App3 opens Socket3, and binds it to ::Port3.
Communication from App1 to App2 goes via local Socket1 leaving the machine on Port1 and going to App2 via Socket2 and remote Port2.
Communication from App1 to App3 goes also via local Socket1 leaving the machine on Port1 and going to App3 via Socket3 and remote Port3.
Communication from App2 to App3 goes also via local Socket2 leaving the machine on Port2 and going to App3 via Socket3 and remote Port3.
Replies go the other way around.
In the BSD sockets world this is possible even on the same networking interface, and in the same UNIX process, because Socket1, Socket2, and Socket3 are independent file descriptors in the kernel, and sendmsg and recvmsg can use these sockets to send to and receive from any UDP address - IIRC via something called the "unconnected UDP socket" mechanism.
And you essentially use the same fd for both sendmsg and recvmsg.
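A minimal sketch of that pattern in Swift; the IPv6 any-address bind, buffer size, and helper names are illustrative, and error handling is omitted:

```swift
import Darwin

// One socket, bound to a random local port, used for both sending to and
// receiving from any peer ("unconnected" UDP).
let fd = socket(AF_INET6, SOCK_DGRAM, 0)

var local = sockaddr_in6()
local.sin6_family = sa_family_t(AF_INET6)
local.sin6_port = 0                          // 0 = let the kernel pick a random port
local.sin6_addr = in6addr_any
_ = withUnsafePointer(to: &local) { ptr in
    ptr.withMemoryRebound(to: sockaddr.self, capacity: 1) { sa in
        bind(fd, sa, socklen_t(MemoryLayout<sockaddr_in6>.size))
    }
}

// The same fd sends to Peer2 or Peer3...
func sendDatagram(_ data: [UInt8], to peer: sockaddr_in6) {
    var addr = peer
    _ = withUnsafePointer(to: &addr) { ptr in
        ptr.withMemoryRebound(to: sockaddr.self, capacity: 1) { sa in
            sendto(fd, data, data.count, 0, sa, socklen_t(MemoryLayout<sockaddr_in6>.size))
        }
    }
}

// ...and receives from either of them, with the sender reported per datagram.
func receiveDatagram() -> (bytes: [UInt8], from: sockaddr_in6) {
    var buffer = [UInt8](repeating: 0, count: 64 * 1024)
    var from = sockaddr_in6()
    var fromLength = socklen_t(MemoryLayout<sockaddr_in6>.size)
    let bytesRead = withUnsafeMutablePointer(to: &from) { ptr in
        ptr.withMemoryRebound(to: sockaddr.self, capacity: 1) { sa in
            recvfrom(fd, &buffer, buffer.count, 0, sa, &fromLength)
        }
    }
    return (Array(buffer.prefix(max(bytesRead, 0))), from)
}
```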
In the Network.framework world, Listener1 will allow me to receive UDP datagrams on Port1. Listener2 on Port2, and Listener3 on Port3.
Connecting from App1 (Port1) to App2 (Port2) is only possible by creating a new NWConnection with requiredLocalEndpoint set to Listener1's local endpoint (Port1).
This works fine, as long as Port2 is on another networking interface or another machine.
But it breaks when Port1 and Port2 are used via NWListener or NWConnection in the same (UNIX) process and on the same networking interface - say, for example, in one XCTestCase.
So if I create peer1 (NWListener1) and peer2 (NWListener2), both bound to localhost, and then open NWConnection1to2, it breaks.
If I bind peer1 to en0 and peer2 to lo0, it works.
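To make that concrete, roughly this is the failing configuration; the ports, the 127.0.0.1 host, and the .main queues are placeholders, not the app's real values:

```swift
import Network

// A listener per peer, UDP, with local endpoint reuse allowed.
func makePeer(on port: NWEndpoint.Port) throws -> NWListener {
    let params = NWParameters.udp
    params.allowLocalEndpointReuse = true
    let listener = try NWListener(using: params, on: port)
    listener.newConnectionHandler = { connection in
        connection.start(queue: .main)       // incoming flow from the other peer
    }
    listener.start(queue: .main)
    return listener
}

// Outgoing connection from peer1 (Port1 = 50001) to peer2 (Port2 = 50002),
// pinned to peer1's local port so replies arrive at peer1's listener.
func connectPeer1ToPeer2() -> NWConnection {
    let params = NWParameters.udp
    params.allowLocalEndpointReuse = true
    params.requiredLocalEndpoint = NWEndpoint.hostPort(host: "127.0.0.1", port: 50001)
    let connection = NWConnection(host: "127.0.0.1", port: 50002, using: params)
    connection.start(queue: .main)
    return connection
}

// Both listeners plus the connection in one process, on one interface (lo0),
// is exactly the combination that breaks for me.
let peer1 = try! makePeer(on: 50001)
let peer2 = try! makePeer(on: 50002)
let connection1to2 = connectPeer1ToPeer2()
```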
I have no insight into how Network.framework is implemented, and this only affects NWListeners and NWConnections using UDP in the same process and on the same interface. So it's not a real-world use case, only a testing one.
Sorry for the long and chaotic description. I have a simple Swift package demonstrating the problem ready, and if you want I can open a DTS request to investigate this further.
At the moment I am working around it by having the XCTestCase create one peer using Network.framework and the other peer using BSD sockets and shuffling data between them.
But I would love to verify all the functionality when both peers use Network.framework, which is easier over lo0.
I am thinking along the lines of enumerating all the interfaces on the machine and binding peer1 to one interface (say en0) and the other to another interface (say en1) so that they can communicate successfully.
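Something along these lines - a sketch of the idea only, using NWPathMonitor to enumerate whatever interfaces the machine reports:

```swift
import Network

// Discover the available interfaces via NWPathMonitor and pin each peer's
// parameters to a different one; the rest of the peer setup stays as before.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    let interfaces = path.availableInterfaces
    guard interfaces.count >= 2 else { return }

    let paramsPeer1 = NWParameters.udp
    paramsPeer1.requiredInterface = interfaces[0]

    let paramsPeer2 = NWParameters.udp
    paramsPeer2.requiredInterface = interfaces[1]

    // ...create the two listeners / connections with these parameters...
}
monitor.start(queue: .main)
```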
I found this article quite enlightening: https://lucasvandongen.dev/swift_actors_and_protocol_extensions.php
I get this reported from time to time by random customers. Probably an iOS bug. I never found a solution; updating iOS, deleting the app, reinstalling the app, or rebooting the device to force the prompt to show again sometimes fixes it… other times not...
You need to dive into the Build Settings tab, search there for Deployment Target, and add a specific iOS version for any macOS SDK - see the screenshot...
HTH,
Martin
How do you create/bind the socket?
What flags do you use (for proto)?
Both sockets above are owned by PID 89318 - what PID is that?