Update: I have managed to replicate the functionality of our existing custom mirror kernels using cropping, affine transform, and compositing operations. It seems to work so far, at least with large (> 4096 px) still images that would otherwise get tiled. There is a noticeable, if temporary, performance hit with a still image. I haven't tested with a movie yet, but I assume that will also suffer, perhaps constantly with each new frame. So I'd still be interested to know whether there's a way to keep the more optimized kernel approach with the larger images/movies, if at all possible.
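For what it's worth, the geometry behind the crop + affine + composite approach can be sketched without any Core Image calls; this is just the rect math for a left-to-right mirror (the names here are illustrative, not our actual filter code — in Core Image this would become `cropped(to:)`, `transformed(by:)`, and `composited(over:)`):

```swift
import Foundation

// Sketch of a left-to-right mirror built from crop + affine + composite.
// Given the full image extent, compute:
//   1. the crop rect for the left half (the portion being mirrored),
//   2. the horizontal-flip transform that maps it onto the right half.
struct MirrorStep {
    let cropRect: CGRect     // portion of the source to mirror
    let scaleX: CGFloat      // -1 flips horizontally
    let translateX: CGFloat  // shifts the flipped copy into place
}

func leftToRightMirror(extent: CGRect) -> MirrorStep {
    let leftHalf = CGRect(x: extent.minX, y: extent.minY,
                          width: extent.width / 2, height: extent.height)
    // Flipping about x = 0 and translating by (minX + maxX) maps the
    // left half exactly onto the right half of the extent.
    return MirrorStep(cropRect: leftHalf,
                      scaleX: -1,
                      translateX: extent.minX + extent.maxX)
}
```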
Thanks for your reply. I have indeed reviewed the programming guide (many times over the years), and I'm aware of how to provide an ROI function. The guide could really do with an update (and fixes for some errors), plus better examples with more detailed explanation. So much of this stuff we've had to figure out by trial and error.
For a long time we've been able to get away without supplying much in the way of custom ROIs, mostly just returning destRect, or destRect inset by -1. But we do have some custom ROIs for filters that use more than one sampler, where the two samplers may be different sizes.
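Those two simple ROI shapes are easy to show in isolation. An ROI callback (the `CIKernelROICallback` passed to `CIKernel.apply`, which takes a sampler index and a destination rect and returns a source rect) reduces in these cases to rect math like the following sketch:

```swift
import Foundation

// The two ROI shapes we've relied on most of the time (sketch; in the
// real callback the rect comes in per sampler index).

// Point-for-point kernels: each output pixel reads only the
// corresponding source pixel, so the ROI is the destination rect itself.
func identityROI(destRect: CGRect) -> CGRect {
    return destRect
}

// Small-neighborhood kernels: each output pixel reads one pixel in every
// direction, so grow the requested region by 1 on each side.
func onePixelBorderROI(destRect: CGRect) -> CGRect {
    return destRect.insetBy(dx: -1, dy: -1)
}
```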
After first reading your suggestion to reduce the ROI to only the portion being mirrored, I could see that some mirroring operations would require only a smaller portion of the input image, and that a limited ROI could help in some cases. But I figured it would only help where the entire image wasn't needed (i.e. not the full-image mirror flips), and where the ROI itself didn't exceed the 4096-pixel limit.
I've edited our code to try this out, and the results are good up to a point. I've kept the affine-transform method for the full-image mirror flips, so the following comments relate only to mirroring a portion of the source image (either a half or a quarter).
I've taken your suggestion and created a custom ROI that is exactly the portion of the source image being mirrored: the left half, the top half, the bottom-right quarter, and so on. This works fine until the source texture gets too big, and logically, the point at which it gets too big depends on how large the ROI needs to be. For example, the ROI for a left-to-right mirror is larger than the ROI for a bottom-right-quarter mirror, so the quarter mirror can handle a larger source image. Once the source texture is too large, the entire rendering loop of our app (running at ~60 Hz) stalls. I'm assuming this is because the texture being passed into the CI filter chain is so large that the render doesn't return fast enough for our frame to complete in time, and this just snowballs. Because of this, I've made the decision to use the affine transforms instead of a kernel with a custom ROI whenever the source image is larger than 8192 pixels in either dimension.
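The dispatch between the two paths ends up being a trivial guard; here is a sketch, where the 8192 threshold is just the empirical cutoff described above, not anything documented:

```swift
import Foundation

// Empirical cutoff: above this size, the kernel-with-custom-ROI path
// stalls our ~60 Hz render loop, so we fall back to the
// crop/affine/composite path instead.
let kernelPathLimit: CGFloat = 8192

func shouldUseAffineFallback(extent: CGRect) -> Bool {
    return max(extent.width, extent.height) > kernelPathLimit
}
```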
But I do want to double-check my assumptions about what is happening with "destRect".
I know the clamping we see with larger images in our original code is because tiling means the entire source is not available on each "pass" through the kernel, and that supplying a custom ROI makes the correct portion of the source available. I just want to confirm my understanding: when you return "destRect" from the ROI, you're always going to get a tiled rect, assuming the image is large enough to cause tiling? I'd have to admit, embarrassingly, that when I converted all of our effects from the old way of setting the ROI (setROISelector) to applyWithExtent, I found examples somewhere that used destRect and followed along, clearly without fully appreciating the impact it could have. It appears that in some cases this isn't right at all, and we want either the entire source (CGRectInfinite, I guess) or, as with this mirroring effect, the portion of the image being mirrored.
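To restate the mirroring case in code: a hypothetical ROI for a bottom-right-quarter mirror ignores destRect entirely and always returns the fixed source region being sampled, so Core Image gets the same answer no matter which tile it asks about:

```swift
import Foundation

// ROI for a bottom-right-quarter mirror (sketch). Unlike returning
// destRect (which under tiling is just the current tile), this returns
// the same fixed source region for every tile, so each pass through the
// kernel has the full mirrored portion available.
// Note: Core Image uses a bottom-left origin, so the bottom-right
// quarter spans [midX, maxX] x [minY, midY].
func bottomRightQuarterROI(imageExtent: CGRect, destRect: CGRect) -> CGRect {
    // destRect is deliberately ignored: the kernel always samples the
    // bottom-right quarter of the source, whatever tile is rendered.
    return CGRect(x: imageExtent.midX, y: imageExtent.minY,
                  width: imageExtent.width / 2,
                  height: imageExtent.height / 2)
}
```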