I have a critical issue where my WebSocket will not connect to a server sitting behind an NGINX reverse proxy, but only on iOS 13. I have tested on both a real device and the simulator with no success. The connection simply hangs in the preparing state: the stateUpdateHandler only ever reports .preparing, and the viabilityUpdateHandler never fires at all. On iOS 14 and later everything works seamlessly. On iOS 13 I can connect to a local server that does not involve any certificates without a problem, but when my production server is in play, something is not being communicated properly.
I am using NWConnection's NWProtocolWebSocket.
The setup is basic and straightforward:
let options = NWProtocolWebSocket.Options()
options.autoReplyPing = configuration.autoReplyPing
options.maximumMessageSize = configuration.maximumMessageSize

// Forward any headers and cookies carried on the URLRequest.
if configuration.urlRequest != nil {
    options.setAdditionalHeaders(configuration.urlRequest?.allHTTPHeaderFields?.map { ($0.key, $0.value) } ?? [])
    configuration.cookies.forEach { cookie in
        options.setAdditionalHeaders([(name: cookie.name, value: cookie.value)])
    }
}
if !configuration.headers.isEmpty {
    options.setAdditionalHeaders(configuration.headers.map { ($0.key, $0.value) })
}

// Trust-all builds a custom TLS stack; otherwise use plain TCP for ws:// and default TLS for wss://.
let parameters: NWParameters = configuration.trustAll
    ? try TLSConfiguration.trustSelfSigned(configuration.trustAll,
                                           queue: configuration.queue,
                                           certificates: configuration.certificates)
    : (configuration.url.scheme == "ws" ? .tcp : .tls)
parameters.defaultProtocolStack.applicationProtocols.insert(options, at: 0)
connection = NWConnection(to: .url(configuration.url), using: parameters)
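For completeness, this is roughly how the connection is started and observed. A minimal sketch, not my exact production code; the logging bodies are illustrative:

connection.stateUpdateHandler = { state in
    switch state {
    case .preparing:
        print("State: preparing")   // on iOS 13 the connection never leaves this state
    case .ready:
        print("State: ready")
    case .waiting(let error), .failed(let error):
        print("State: \(state), error: \(error)")
    default:
        break
    }
}
connection.viabilityUpdateHandler = { isViable in
    print("Viable: \(isViable)")    // never fires on iOS 13
}
connection.start(queue: configuration.queue)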
The trust store setup is also straightforward:
public static func trustSelfSigned(
    _ trustAll: Bool,
    queue: DispatchQueue,
    certificates: [String]?
) throws -> NWParameters {
    let options = NWProtocolTLS.Options()

    // Load the bundled DER certificates to use as custom trust anchors.
    let secTrustRoots: [SecCertificate]? = try certificates?.compactMap { certificate in
        let filePath = Bundle.main.path(forResource: certificate, ofType: "der")!
        let data = try Data(contentsOf: URL(fileURLWithPath: filePath))
        return SecCertificateCreateWithData(nil, data as CFData)!
    }

    sec_protocol_options_set_verify_block(
        options.securityProtocolOptions,
        { _, sec_trust, sec_protocol_verify_complete in
            // Trust-all short-circuits verification entirely.
            guard !trustAll else {
                sec_protocol_verify_complete(true)
                return
            }
            let trust = sec_trust_copy_ref(sec_trust).takeRetainedValue()
            if let trustRootCertificates = secTrustRoots {
                SecTrustSetAnchorCertificates(trust, trustRootCertificates as CFArray)
            }
            dispatchPrecondition(condition: .onQueue(queue))
            SecTrustEvaluateAsyncWithError(trust, queue) { _, result, error in
                if let error = error {
                    print("Trust failed: \(error.localizedDescription)")
                }
                print("Validation Result: \(result)")
                sec_protocol_verify_complete(result)
            }
        },
        queue
    )
    sec_protocol_options_set_min_tls_protocol_version(options.securityProtocolOptions, .TLSv12)

    let parameters = NWParameters(tls: options)
    parameters.allowLocalEndpointReuse = true
    parameters.includePeerToPeer = true
    return parameters
}
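One thing worth trying to narrow this down is evaluating the trust synchronously inside the verify block instead of using the async variant. A minimal sketch of that diagnostic, not a confirmed fix; the verify block already runs on the queue passed to sec_protocol_options_set_verify_block, and SecTrustEvaluateWithError is available from iOS 12:

// Inside the verify block, in place of SecTrustEvaluateAsyncWithError:
var evaluationError: CFError?
let isTrusted = SecTrustEvaluateWithError(trust, &evaluationError)
if let evaluationError = evaluationError {
    print("Trust failed: \(evaluationError)")
}
sec_protocol_verify_complete(isTrusted)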
I have a question regarding isMultitaskingCameraAccessEnabled for Picture in Picture on iOS 16, intended for video calls. In my capture session flow I have set up multitasking camera access as indicated in the developer documentation:
https://developer.apple.com/documentation/avkit/accessing_the_camera_while_multitasking
https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls
let captureSession = AVCaptureSession()

// Configure the capture session.
captureSession.beginConfiguration()
if captureSession.isMultitaskingCameraAccessSupported {
    // Enable using the camera in multitasking modes.
    captureSession.isMultitaskingCameraAccessEnabled = true
}
captureSession.commitConfiguration()

// Start the capture session.
captureSession.startRunning()
However, isMultitaskingCameraAccessSupported is always false. What dictates this setting? How can I support multitasking camera access, i.e. get this property to return true?
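For context, this capture session feeds a video-call Picture in Picture setup along the lines of the second linked article. A minimal sketch; remoteVideoView and sourceView are hypothetical placeholders, not my exact code:

import AVKit
import UIKit

let remoteVideoView = UIView()   // hypothetical: the view rendering the remote video
let sourceView = UIView()        // hypothetical: the view hosting the call UI

// Video-call PiP wiring per "Adopting Picture in Picture for video calls".
let videoCallViewController = AVPictureInPictureVideoCallViewController()
videoCallViewController.view.addSubview(remoteVideoView)

let contentSource = AVPictureInPictureController.ContentSource(
    activeVideoCallSourceView: sourceView,
    contentViewController: videoCallViewController)

let pipController = AVPictureInPictureController(contentSource: contentSource)
pipController.canStartPictureInPictureAutomaticallyFromInline = true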
I am using the Accelerate framework to convert YUV data to ARGB data for a video call app. The framework works great. However, when I put calls on hold, I use a placeholder image sent from the server, and that image sometimes causes issues because of its size: Accelerate tells me that the region of interest is larger than the input buffer (roiLargerThanInputBuffer). I am not sure exactly how to address this. Any thoughts or suggestions would be greatly appreciated.
The problem was that my video stream's pixel buffer width and height changed on the server side. All that needed to be done was to check for when the size changes, free the current vImage_Buffer, and reinitialize a new one with the correct dimensions.
Is it proper to tell the Accelerate framework to change the vImage_Buffer width and height this way? It seems to work well.
if myBuffer.width != destinationBuffer.width || myBuffer.height != destinationBuffer.height {
    // Free the old destination and reinitialize it at the new size.
    free(destinationBuffer.data)
    error = vImageBuffer_Init(&destinationBuffer,
                              vImagePixelCount(myBuffer.height),
                              vImagePixelCount(myBuffer.width),   // width, not height, as the third argument
                              cgImageFormat.bitsPerPixel,
                              vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else {
        return nil
    }
}
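For completeness, here is roughly how that resize check slots into the per-frame conversion path. A sketch under assumptions: planar 4:2:0 input converted with vImageConvert_420Yp8_CbCr8ToARGB8888 and an already-generated vImage_YpCbCrToARGB conversion info; the names are illustrative, not my exact code:

import Accelerate

// Hypothetical per-frame path: resize the destination only when the source
// geometry changes, then run the Yp/CbCr -> ARGB conversion.
func convertFrame(yPlane: inout vImage_Buffer,
                  cbCrPlane: inout vImage_Buffer,
                  destination: inout vImage_Buffer,
                  info: inout vImage_YpCbCrToARGB) -> vImage_Error {
    if yPlane.width != destination.width || yPlane.height != destination.height {
        free(destination.data)
        let error = vImageBuffer_Init(&destination,
                                      yPlane.height,
                                      yPlane.width,
                                      32, // ARGB8888
                                      vImage_Flags(kvImageNoFlags))
        guard error == kvImageNoError else { return error }
    }
    // If the destination no longer matches the source size, this call is
    // where roiLargerThanInputBuffer shows up.
    return vImageConvert_420Yp8_CbCr8ToARGB8888(&yPlane, &cbCrPlane, &destination,
                                                &info,
                                                nil,  // no channel permutation
                                                255,  // opaque alpha
                                                vImage_Flags(kvImageNoFlags))
}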
Thanks