On iOS, there is a watchdog that terminates processes whose GPU work runs for too long. I don't think there is much you can do to change that time limit. The only way to prevent it is to improve the runtime of your algorithm. You could also try splitting the work into multiple steps, if possible, e.g., by splitting the image into regions, performing the regression on each region first, and then once more on the results of that first step, as sketched below.
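For example, here is a rough sketch of the tiling idea, assuming a Core Image based pipeline. `regressionKernel` is only a placeholder for your own filter, not a real API, and the tile size would need tuning:

```swift
import CoreImage

// Hypothetical sketch: run the expensive kernel tile by tile in separate
// render passes, so the watchdog sees many short GPU tasks instead of one
// long one. `regressionKernel` stands in for your own CIKernel/filter.
func processInTiles(image: CIImage, tileSize: CGFloat, context: CIContext,
                    regressionKernel: (CIImage) -> CIImage) -> [CGImage] {
    var results: [CGImage] = []
    let extent = image.extent
    var y = extent.minY
    while y < extent.maxY {
        var x = extent.minX
        while x < extent.maxX {
            let tileRect = CGRect(x: x, y: y, width: tileSize, height: tileSize)
                .intersection(extent)
            let tile = regressionKernel(image.cropped(to: tileRect))
            // Each createCGImage call is its own GPU submission.
            if let cgTile = context.createCGImage(tile, from: tileRect) {
                results.append(cgTile)
            }
            x += tileSize
        }
        y += tileSize
    }
    // A second pass could then run the regression once more over `results`.
    return results
}
```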
Thanks for the reply! I already filed FB14074014 for this. I also attached a small sample project now.
Did you already file a Feedback report for this issue?
Do you have any advanced tips setup? How are you calling Tips.configure()?
Via a private property on MLModelConfiguration: config.setValue(1, forKey: "experimentalMLE5EngineUsage") (1 = disabled, 0 = enabled if possible, 2 = force).
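A minimal sketch of that workaround, assuming the key still behaves as described. Note that this is a private, unsupported setting and may change or stop working in future OS releases:

```swift
import CoreML

// Unofficial workaround: toggle the MLE5 engine via the private
// "experimentalMLE5EngineUsage" key on MLModelConfiguration (KVC).
// Reported values: 1 = disabled, 0 = enabled if possible, 2 = force.
let config = MLModelConfiguration()
config.setValue(1, forKey: "experimentalMLE5EngineUsage")

// `MyModel` is a placeholder for your generated Core ML model class:
// let model = try MyModel(configuration: config)
```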
Very strange indeed. The workaround works, thanks!
Thanks for the reply, Quinn! Unfortunately, the issue still exists – even with the release version of 13.4.
Done: FB11650654
Thanks for your help!
@miric2 How did you find the -CustomRenderer tag? When I set it, Photos still doesn't show the HDR badge.
Thank you! Yes, this is filed under FB10151072.
Correct. But it also creates a Metal context by default if you just use CIContext(). You would need a Metal-based context anyway if you want to use Metal Core Image kernels.
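For illustration, a minimal sketch of creating an explicitly Metal-backed context:

```swift
import CoreImage
import Metal

// Explicitly Metal-backed CIContext. Plain CIContext() would also pick Metal
// by default on current systems; being explicit is mainly useful when you
// also need the MTLDevice for your own Metal-based CIKernels or textures.
if let device = MTLCreateSystemDefaultDevice() {
    let ciContext = CIContext(mtlDevice: device)
    _ = ciContext // render your CIImages with this context as usual
}
```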
To sample at the center of the pixel. For instance, if your sampler uses linear sampling, the coordinate 1.0 would return a mix of the first and the second pixel, since it lies exactly between them. 1.5 would return only the color value of the second pixel.
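As a toy illustration (plain Swift, not a Core Image API), here is 1-D linear sampling with pixel centers at x + 0.5, showing why 1.0 blends two pixels while 1.5 hits the second pixel exactly:

```swift
// Toy 1-D model of linear sampling; pixel i has its center at i + 0.5.
let pixels: [Float] = [0.0, 1.0, 0.5, 0.25]

func linearSample(_ pixels: [Float], at coordinate: Float) -> Float {
    // Shift so that pixel i's center (i + 0.5) maps exactly to index i.
    let x = coordinate - 0.5
    let lower = max(Int(x.rounded(.down)), 0)
    let upper = min(lower + 1, pixels.count - 1)
    let t = x - Float(lower)
    return pixels[lower] * (1 - t) + pixels[upper] * t
}

print(linearSample(pixels, at: 1.0)) // 0.5: a 50/50 mix of pixel 0 and pixel 1
print(linearSample(pixels, at: 1.5)) // 1.0: exactly pixel 1, sampled at its center
```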
Can you please try creating a larger image (16x16, for instance) and test again? I was also observing strange behavior (see above) with small EXR images. It would be interesting to see whether these issues are related.
Any chance that Metal support is coming at some point?
Unfortunately, this is still the case (macOS 12 beta 6 + Xcode 13 beta 5)...