L2 cache for Core Image filter?

Hi,

I'm developing an app that applies a bokeh blur. I know there are many fast algorithms for round bokeh, but I still want to process the image the standard way so that I can produce a bokeh of any shape. The main processing code is below:

for (i = 0; i < radius; i++)
    for (j = 0; j < radius; j++)
    {
        // get the weight for kernel tap (i, j)
        // get the pixel color and accumulate weight * color
    }



With the nested loops above, the filter spends most of its time fetching colors pixel by pixel. If the radius is 100, it performs 10,000 fetches for every output pixel. Before I came to iOS, I processed these images in plain C on feature phones, where I could keep the image buffer in the L2 or L3 cache, so the picture and its pixels loaded as fast as I needed.
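For reference, the full loop looks roughly like this in plain C. The grayscale buffer layout and the flat kernel array are just for illustration, not my actual code:

#include <stdint.h>

/* Brute-force convolution: for every output pixel, visit every tap of
 * the radius x radius kernel. Cost is O(width * height * radius^2)
 * pixel fetches. */
void bokeh_blur(const uint8_t *src, uint8_t *dst,
                int width, int height,
                const float *kernel, int radius)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float sum = 0.0f, wsum = 0.0f;
            for (int i = 0; i < radius; i++) {
                for (int j = 0; j < radius; j++) {
                    int sy = y + i - radius / 2;
                    int sx = x + j - radius / 2;
                    if (sy < 0 || sy >= height || sx < 0 || sx >= width)
                        continue;              /* skip taps outside the image */
                    float w = kernel[i * radius + j];  /* get weight          */
                    sum  += w * src[sy * width + sx];  /* get pixel color     */
                    wsum += w;
                }
            }
            dst[y * width + x] = (uint8_t)(wsum > 0.0f ? sum / wsum : 0.0f);
        }
    }
}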


Is there any way to process the filter the way I want? Or is there any way to fetch the same pixels repeatedly at high speed?

Thank you.

Replies

I'm not sure whether this applies to your case, but you should check if your filter is separable. If it is, you can replace the single 2D pass with two 1D passes, which drastically reduces the number of operations per pixel.
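To illustrate, here is a minimal sketch in plain C, assuming the 2D kernel factors into a horizontal vector kx and a vertical vector ky, each normalized to sum to 1 (Gaussian and box kernels factor this way; an arbitrary bokeh shape generally does not). The function name and buffer layout are made up for illustration:

#include <stdint.h>
#include <stdlib.h>

/* Separable blur: a horizontal 1D pass followed by a vertical 1D pass,
 * so each pixel needs 2 * radius taps instead of radius^2. */
void separable_blur(const uint8_t *src, uint8_t *dst,
                    int width, int height,
                    const float *kx, const float *ky, int radius)
{
    float *tmp = malloc((size_t)width * height * sizeof *tmp);
    int half = radius / 2;

    /* Horizontal pass: src -> tmp */
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            float sum = 0.0f;
            for (int j = 0; j < radius; j++) {
                int sx = x + j - half;
                if (sx < 0) sx = 0;
                if (sx >= width) sx = width - 1;   /* clamp at the edges */
                sum += kx[j] * src[y * width + sx];
            }
            tmp[y * width + x] = sum;
        }

    /* Vertical pass: tmp -> dst */
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            float sum = 0.0f;
            for (int i = 0; i < radius; i++) {
                int sy = y + i - half;
                if (sy < 0) sy = 0;
                if (sy >= height) sy = height - 1;
                sum += ky[i] * tmp[sy * width + x];
            }
            dst[y * width + x] = (uint8_t)(sum + 0.5f);
        }

    free(tmp);
}

With radius 100 that is 200 taps per pixel instead of 10,000, before any caching tricks at all.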