Reply to How to make iPad app move to display show full-screen and center align
Thank you very much for your response. My use case involves using other applications on the iPad while my app is displayed on an external monitor. Based on the documentation you provided, I understand that the recommended role for a UIScene on an external display is windowExternalDisplayNonInteractive. However, my requirement is to move the main application to the external display, and the documentation advises against changing the windowApplication role. Could you explain the considerations behind this recommendation? I'm wondering whether it would be feasible to change the role to windowExternalDisplayNonInteractive when moving the app to the external display, and then change it back to windowApplication when returning to the iPad. Is this approach viable, or do you have any other suggestions for achieving this?
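For context, my understanding is that the documented approach would look roughly like the sketch below (assuming the iOS 17 UISceneSessionActivationRequest API, and that the Info.plist registers a scene configuration under UIWindowSceneSessionRoleExternalDisplayNonInteractive). But this creates a separate non-interactive scene rather than moving the main windowApplication scene, which is what I actually want:

import UIKit

// Sketch: request a new non-interactive scene on the external display,
// instead of changing the role of the existing windowApplication scene.
// Assumes the Info.plist declares a scene configuration for the
// UIWindowSceneSessionRoleExternalDisplayNonInteractive role.
func presentOnExternalDisplay() {
    let request = UISceneSessionActivationRequest(role: .windowExternalDisplayNonInteractive)
    UIApplication.shared.activateSceneSession(for: request) { error in
        print("Could not activate external display scene: \(error)")
    }
}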
Sep ’24
Reply to Core ML Async API Seems to Not Work Properly
I updated to a version that runs without crashing, but the prediction speed is almost the same as with the sync API. createFrameAsync is called from the ScreenCaptureKit stream.

private func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
    if let surface = getIOSurface(for: sampleBuffer) {
        Task {
            do {
                try await runModelAsync(surface)
            } catch {
                os_log("error: %@", String(describing: error))
            }
        }
    }
}

func runModelAsync(_ surface: IOSurface) async throws {
    try Task.checkCancellation()
    guard let model = mlmodel else { return }
    do {
        // Wrap the IOSurface in a CVPixelBuffer and resize it to the model's input size
        var px: Unmanaged<CVPixelBuffer>?
        let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
        guard status == kCVReturnSuccess, let px2 = px?.takeRetainedValue() else { return }
        guard let data = resizeIOSurfaceIntoPixelBuffer(
            of: px2,
            from: CGRect(x: 0, y: 0, width: InputWidth, height: InputHeight)
        ) else { return }

        // Model prediction
        var results: [Float] = []
        let inferenceStartTime = Date()
        let input = model_smallInput(input: data)
        let prediction = try await model.model.prediction(from: input)

        // Copy the output multi-array into a Float array
        if let output = prediction.featureValue(for: "output")?.multiArrayValue {
            if let bufferPointer = try? UnsafeBufferPointer<Float>(output) {
                results = Array(bufferPointer)
            }
        }

        // Hand the result to the Metal renderer
        await ScreenRecorder.shared
            .setRenderDataNormalized(surface: surface, depthData: results)
    } catch {
        print("Error performing inference: \(error)")
    }
}

Since the async prediction API cannot speed up the prediction, is there anything else I can do? The prediction time is almost the same on an M2 Pro MacBook and an M1 MacBook Air!
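One direction I'm thinking about trying instead is the batch prediction API, in case per-call overhead is the bottleneck rather than the model itself. A rough, untested sketch (model_smallInput and the "output" feature name come from my code above):

import CoreML

// Sketch: run several resized frames through the model in one batched call.
func runModelBatch(_ inputs: [model_smallInput], model: MLModel) throws -> [[Float]] {
    let batch = MLArrayBatchProvider(array: inputs)
    let outputs = try model.predictions(from: batch, options: MLPredictionOptions())
    var results: [[Float]] = []
    for i in 0..<outputs.count {
        // Copy each "output" multi-array into a plain Float array.
        if let output = outputs.features(at: i).featureValue(for: "output")?.multiArrayValue {
            let values = output.withUnsafeBufferPointer(ofType: Float.self) { Array($0) }
            results.append(values)
        }
    }
    return results
}

Would batching like this be expected to help here, or is the compute itself the limit?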
Oct ’24
Reply to MultiThreaded rendering with actor
Hi, thanks for your explanation! Ideally, I would like to draw as many captured frames as possible. However, I understand that if processing speed isn’t sufficient, I may need to drop some frames to keep up with real-time rendering. That said, my goal is definitely not to draw only the latest frame, as I want to preserve as much of the original capture data as possible. Let me know if this aligns with what you’re asking!
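To make the buffering policy concrete, here is a rough sketch of what I have in mind (the types and names are placeholders, not my real capture code): a small FIFO that drops the oldest frame only when the renderer falls behind, rather than always keeping just the latest frame.

import CoreVideo

// Sketch: bounded FIFO of pending frames shared between the capture
// stream and the render loop.
actor FrameBuffer {
    private var pending: [CVPixelBuffer] = []
    private let capacity = 8  // small bound to keep memory predictable

    // Called from the capture stream for every frame.
    func enqueue(_ frame: CVPixelBuffer) {
        if pending.count == capacity {
            // Renderer is behind: drop the oldest frame rather than the
            // newest, preserving as much of the capture sequence as possible.
            pending.removeFirst()
        }
        pending.append(frame)
    }

    // Called from the render loop; nil means nothing to draw yet.
    func dequeue() -> CVPixelBuffer? {
        pending.isEmpty ? nil : pending.removeFirst()
    }
}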
Nov ’24