You can run it with an iPhone simulator.
I think the API for observing spatialCaptureDiscomfortReasons is not ready yet while it's in beta.
Maybe wait for the iOS 18 public release.
I filed a bug with the same title; its FB number is:
FB14829880
Thank you very much for your response.
My use case involves using other applications on the iPad while having my app displayed on an external monitor. Based on the documentation you provided, I understand that the recommended Role for a UIScene on an external display is windowExternalDisplayNonInteractive. However, my requirement is to move the main application to the external display. The documentation advises against changing the windowApplication role. Could you explain the considerations behind this recommendation?
I'm considering whether it would be feasible to change the role to windowExternalDisplayNonInteractive when moving the app to the external display, and then change it back to windowApplication when returning to the iPad. Is this approach viable? Or do you have any other suggestions for achieving this requirement?
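For context, here is a minimal sketch of the pattern the documentation recommends instead of changing the main scene's role: connect a second scene with the external, non-interactive role and give it its own window. `ExternalViewController` is a hypothetical placeholder for the content shown on the monitor.

```swift
import UIKit

// Sketch: keep the main scene's windowApplication role unchanged, and show
// content on the external screen through a separate non-interactive scene.
// ExternalViewController is a hypothetical placeholder view controller.
final class ExternalSceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene,
              session.role == .windowExternalDisplayNonInteractive else { return }
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = ExternalViewController()
        self.window = window
        window.isHidden = false
    }
}
```

This keeps each scene's role fixed for its whole lifetime, which is what the documentation seems to assume.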
Is there any update on this issue? Neither the iOS 18.0 release version nor the 18.1 beta seems to fix it.
Thank you for your insight. That's a good point about the potential thermal throttling issue. I'm curious about how we can maintain efficient execution while avoiding thermal throttling. Do you have any recommendations for optimizing the prediction runs to balance performance and thermal management?
Hi, I think I'm doing the async part wrong. My app captures the screen with ScreenCaptureKit and uses a Core ML model to convert its style, then draws the result on a Metal view. This situation might not be able to benefit from async prediction, because the screenshots must keep their order.
Is it still possible to speed up the prediction?
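For reference, this is the kind of model configuration I'm experimenting with. A sketch only, assuming the generated model class is `model_small` (as in the code later in this thread); which compute-unit setting is fastest has to be profiled per device.

```swift
import CoreML

// Sketch: pin the model to specific compute units and reuse one configured
// instance, instead of reloading or using the default configuration.
// model_small is the Xcode-generated class for the .mlmodel in this thread.
func loadStyleModel() throws -> model_small {
    let config = MLModelConfiguration()
    // Candidates: .all, .cpuAndGPU, .cpuAndNeuralEngine — profile each one.
    config.computeUnits = .cpuAndNeuralEngine
    return try model_small(configuration: config)
}
```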
I updated to a version that runs without crashing, but the prediction speed is almost the same as with the sync API. createFrameAsync is called from the ScreenCaptureKit stream.
private func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
    if let surface = getIOSurface(for: sampleBuffer) {
        Task {
            do {
                try await runModelAsync(surface)
            } catch {
                os_log("error: \(error)")
            }
        }
    }
}
func runModelAsync(_ surface: IOSurface) async throws {
    try Task.checkCancellation()
    guard let model = mlmodel else { return }
    do {
        // Wrap the IOSurface in a CVPixelBuffer and resize it to the model input size
        var px: Unmanaged<CVPixelBuffer>?
        let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
        guard status == kCVReturnSuccess, let px2 = px?.takeRetainedValue() else { return }
        guard let data = resizeIOSurfaceIntoPixelBuffer(
            of: px2,
            from: CGRect(x: 0, y: 0, width: InputWidth, height: InputHeight)
        ) else { return }

        // Model prediction
        var results: [Float] = []
        let inferenceStartTime = Date()
        let input = model_smallInput(input: data)
        let prediction = try await model.model.prediction(from: input)
        print("Prediction took \(Date().timeIntervalSince(inferenceStartTime) * 1000) ms")

        // Copy the output multi-array into a Float array
        if let output = prediction.featureValue(for: "output")?.multiArrayValue,
           let bufferPointer = try? UnsafeBufferPointer<Float>(output) {
            results = Array(bufferPointer)
        }

        // Hand the result to Metal for rendering
        await ScreenRecorder.shared
            .setRenderDataNormalized(surface: surface, depthData: results)
    } catch {
        print("Error performing inference: \(error)")
    }
}
Since the async prediction API cannot speed up the prediction, is there anything else I can do? The prediction time is almost the same on a MacBook Pro (M2 Pro) and a MacBook Air (M1)!
Yes
Yes. And I printed the timestamp after commitFrame, and it shows a sequential order. So I think that is very strange, and I cannot figure out where the problem is.
I notice that each time I call Core ML's predict, the processing time varies, and the difference can exceed 5 ms. Is this normal?
Additionally, I've observed that when P-CPU utilization increases, Neural Engine utilization also increases, which in turn reduces the prediction time. Is this behavior also normal?
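Some run-to-run variance is commonly attributed to frequency scaling and scheduling, but to make the numbers concrete I've been timing each call. A small stdlib-only sketch (`measure` is a hypothetical helper, not a Core ML API); the prediction call would go inside `work`:

```swift
// Sketch: run `work` repeatedly and report min/max/mean latency in ms,
// to quantify run-to-run variance. Uses only the Swift standard library.
func measure(iterations: Int, _ work: () -> Void) -> (min: Double, max: Double, mean: Double) {
    let clock = ContinuousClock()
    var samplesMs: [Double] = []
    for _ in 0..<iterations {
        let elapsed = clock.measure(work)
        // Duration components: whole seconds plus attoseconds (1e18 per second).
        let ms = Double(elapsed.components.seconds) * 1000
            + Double(elapsed.components.attoseconds) / 1e15
        samplesMs.append(ms)
    }
    return (samplesMs.min() ?? 0, samplesMs.max() ?? 0,
            samplesMs.reduce(0, +) / Double(samplesMs.count))
}
```

Comparing min against mean shows how much of the 5 ms gap is noise versus a consistent floor.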
Hi, thanks for your explanation!
Ideally, I would like to draw as many captured frames as possible. However, I understand that if processing speed isn’t sufficient, I may need to drop some frames to keep up with real-time rendering. That said, my goal is definitely not to draw only the latest frame, as I want to preserve as much of the original capture data as possible.
Let me know if this aligns with what you’re asking!
draw(in:) is a callback from MTKViewDelegate; it's called whenever the OS signals an update, so the timing is driven by the OS.