Reply to Network privacy permission check
With WebRTC, mDNS may be used to discover peers (though not via the Bonjour APIs). At other times, TCP or UDP candidates might be used to send and receive media within the scope of the local network. In strict client-to-server streaming cases it may be easy to rule out local network use, but often it is not, since the protocol is designed for general peer-to-peer connectivity. Since WebRTC uses POSIX networking APIs, it would be nice for software developers to have a way to determine the authorization status beforehand, and to be notified if the status changes. We could then take actions like not gathering host candidates when local network permission is not granted. Alternatively, we could filter any candidates that are gathered so that use of the local network is avoided.
Jun ’20
Reply to Broadcast extension limit vs large frames on iPad
I know that Apple strongly encourages the use of VideoToolbox for real-time communication, but it is not viable in every use case. Specifically, the VP8 software encoder in WebRTC is very mature and well optimized for screen-share content, while the H.264 implementation is quite lacking in this area. This is not a slight on VideoToolbox, as I think VTCompressionSession does offer adequate control planes.

So far, Piyush and I have found that it is possible to use VP8 on iOS devices with some downscaling and fit within the 50 MB limit. The problem is that ReplayKit.framework seems to be very memory hungry when the device is processing constrained, such as when launching apps or using some home screen features during a broadcast. I have observed up to 8 ReplayKit buffers on an iPhone 7 and 6 on an iPad Pro 12.9" Gen 2, leading to memory exhaustion and a resource limit crash. This is speaking for iOS 13.5.1:

Memory peak for ReplayKit = 6 * 1920 * 1440 * 1.5 = 23.7 MB

In practice the extension really only has about 50% of the memory budget when processing gets backed up. The queue that creates the RPIOSurface objects does not seem to drop IOSurfaces when the system is backed up. It would be nice if the queue were sized smaller, or if we could ask for smaller buffers that take up less memory.
Jun ’20
Reply to Network privacy permission check
We have filed FB7801261 against 14.0 beta 1, and there has been no movement on APIs as of 14.0 beta 3. There is quite a bit of information on the ticket from both RCP and myself about the use case, and about how the new APIs differ from the existing permission APIs in CoreLocation and AVFoundation that we rely on every day.

Responding to the comments from the Apple Frameworks Engineer:

"If you have a case where you'd like to know if your app doesn't have permission, can you describe it? Is this for using Bonjour, or a direct connection use case?"

A direct connection. As we have tried to explain, there are thousands of apps that use WebRTC, which under the hood uses ICE. The most common configuration of ICE attempts to establish a direct connection with POSIX TCP and UDP sockets. When a PeerConnection is negotiated, the peers are ignorant of their network topology and exchange candidates, testing pairs in an attempt to establish a low-latency connection on the cheapest network route possible. Traffic may go through a relay server, but this is often the last check performed, due to the time needed to perform TURN allocations and grant permissions for the peers. The first check performed is often on the pairing of host candidates from each device. If permission is declined, this check kills the socket and WebRTC does not recover it effectively.

I will stop here, as it appears that I am not allowed to attach my sequence diagram. I believe we included it in the FB, but I will add it again to be sure. We can skip the host candidate checks (and the optional mDNS discovery phase) if permission is declined, but not without an API to check the state of the user's permissions and reconfigure our ICE agent accordingly.
Jul ’20
Reply to Network privacy permission check
I had one more thought about non-recoverable scenarios as I investigate this problem further in beta 3.

"For active use not during testing, we generally recommend against pre-flight checks wherever possible, however the error codes returned from attempts to use local network resources should provide enough information to display/do the right thing if such access is critical, or appropriately handle using only the remaining options if that particular resource isn't the only option for achieving the user's intent. If you can find places where this isn't the case, we'd love to know about it!"

If the ICE transport protocol is UDP, then the host, server reflexive, and relay candidates share the same base transport address (protocol:ip:port). This means that terminal socket errors received due to host candidate checks can cause the STUN and relay candidates to never be gathered. Using a pre-flight permission check would allow us to configure the agent so that host checks are avoided and the UDP socket is not closed. I will investigate handling the socket-level error, but this is a not insignificant change to a complex, cross-platform code base.

The relationship between the candidates is demonstrated in RFC 8445, Section 2.1. For the sake of discussion, imagine that candidates are tested in the order X:x, X1':x1', Y:y. The process only reaches X:x if the first check uses the local network.

                  To Internet
                      |
                      |
                      |  /------------  Relayed
                  Y:y | /               Address
                  +--------+
                  |        |
                  |  TURN  |
                  | Server |
                  |        |
                  +--------+
                      |
                      |
                      | /------------  Server
               X1':x1'|/               Reflexive
                +------------+         Address
                |    NAT     |
                +------------+
                      |
                      | /------------  Local
                  X:x |/               Address
                  +--------+
                  |        |
                  | Agent  |
                  |        |
                  +--------+

           Figure 2: Candidate Relationships
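To make the shared-base point concrete, here is a small sketch (a hypothetical data model of my own, not libwebrtc code) showing that when the UDP socket backing the host candidate dies, the server reflexive and relay candidates die with it:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    type: str     # "host", "srflx", or "relay"
    address: str  # transport address advertised to the peer
    base: str     # protocol:ip:port of the local socket it rests on

# Over UDP, the srflx (X1':x1') and relay (Y:y) candidates share the
# host candidate's base socket (X:x), per RFC 8445 Figure 2.
candidates = [
    Candidate("host",  "udp:192.168.1.10:54321", base="udp:192.168.1.10:54321"),
    Candidate("srflx", "udp:203.0.113.5:62000",  base="udp:192.168.1.10:54321"),
    Candidate("relay", "udp:198.51.100.9:3478",  base="udp:192.168.1.10:54321"),
]

def survivors(candidates, closed_base):
    """Candidates still usable after the socket for `closed_base` is closed,
    e.g. by a terminal error on a denied local-network host check."""
    return [c for c in candidates if c.base != closed_base]

# A permission-denied error on the host check closes the shared socket,
# so nothing survives.
print([c.type for c in survivors(candidates, "udp:192.168.1.10:54321")])  # []
```

This is why a pre-flight check matters here: avoiding the host check entirely keeps the shared socket alive for the STUN and TURN candidates.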
Jul ’20
Reply to AVAssetResourceLoaderDelegate and TS files error: Error Domain=CoreMediaErrorDomain Code=-12881 "custom url not redirect"
Hi,

First off, this thread has been very helpful!

"The only response to an AVAssetResourceLoadingRequest for a segment (.ts) file that AVPlayer will accept is a redirect to an HTTP URL. respondWithData will be rejected."

I am wondering if the same limitation exists with fMP4 files? I want to feed real-time video delivered using WebRTC into an AVPlayerLayer that is backing an AVPictureInPictureController. I am doing this because AVPictureInPictureController does not support AVSampleBufferDisplayLayer, which is how I would otherwise think to solve this problem. After watching the WWDC 2020 sessions, I am using the AVAssetWriter segmentation APIs to transcode the input and then generating the m3u8 myself. Users can tolerate some delay if the content is a presentation screen share, but I don't want to run a local web server when I already have the fMP4 segments and the m3u8 in memory. The transfer of the segments is going to be very quick since it will be happening on device.

Thanks again for the helpful discussion.
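For reference, the media playlist I am generating looks roughly like the sketch below (the `webrtc-hls://` scheme and the file names are placeholders for whatever custom scheme the resource loader delegate intercepts; the helper is my own illustration):

```python
def fmp4_live_playlist(init_name, segment_names,
                       target_duration=1, media_sequence=0):
    """Build a live HLS media playlist for fMP4 segments.
    Custom-scheme URLs force AVAssetResourceLoaderDelegate to be consulted."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:7",  # fMP4 segments require playlist version >= 7
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
        # The initialization segment (ftyp/moov) comes from EXT-X-MAP.
        f'#EXT-X-MAP:URI="webrtc-hls://init.mp4"'.replace(
            "init.mp4", init_name),
    ]
    for name in segment_names:
        lines.append(f"#EXTINF:{target_duration:.3f},")
        lines.append(f"webrtc-hls://{name}")
    return "\n".join(lines) + "\n"

print(fmp4_live_playlist("init.mp4", ["seg0.m4s", "seg1.m4s"]))
```

No `#EXT-X-ENDLIST` tag is appended, so the player treats the playlist as live and keeps re-requesting it as new segments arrive from the AVAssetWriter delegate.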
Jul ’20