I've looked in multiple places online, including here in the forums, where a somewhat similar question was asked (and never answered :( ), but I'm going to ask anyway:
vImage, Metal Performance Shaders, and Core Image have a big overlap in the kinds of operations they perform on image data. But none of the supporting materials (documentation, WWDC session videos, help) for any one of them pays much heed to the existence of the others when describing itself.
For example, Core Image talks about how efficient and fast it is. MPS talks about everything being "hand rolled" and optimized for the hardware it's running on, which means, yes, fast and efficient. And vImage talks about being fast and... yup, energy-saving.
But others and I have very little to go on as to when vImage makes sense over MPS, or over Core Image. For example, if I have a large set of images and I want to get the mean color value of each, equalize or adjust the histogram of each, or perform some other color operation on each image in the set, which is best? (I've put a rough sketch of what I mean below.)
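To make the question concrete, here's a rough, untested Swift sketch of the kind of per-image work I mean -- mean color via Core Image's CIAreaAverage, and histogram equalization via vImage's vImageEqualization_ARGB8888. The function names, and the assumption that the images are already available as CIImages / vImage_Buffers, are just mine for illustration; MPS has MPSImageHistogram / MPSImageHistogramEqualization for apparently the same job, which is exactly why I'm unsure which to pick.

```swift
import Accelerate
import CoreImage

// Mean color via Core Image: CIAreaAverage reduces the image's whole extent
// to a single pixel, which is then read back as RGBA bytes.
func meanColor(of image: CIImage, using context: CIContext) -> [UInt8]? {
    guard let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: image,
                                             kCIInputExtentKey: CIVector(cgRect: image.extent)]),
          let output = filter.outputImage else { return nil }

    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)
    return pixel // [R, G, B, A], 0...255
}

// Histogram equalization via vImage: runs on CPU-side ARGB8888 buffers.
// (MPS would do the same on the GPU with MPSImageHistogramEqualization.)
func equalize(src: inout vImage_Buffer, into dest: inout vImage_Buffer) -> vImage_Error {
    vImageEqualization_ARGB8888(&src, &dest, vImage_Flags(kvImageNoFlags))
}
```

Both of those look reasonable to me on paper, and that's the problem: nothing tells me which one I should reach for when I'm processing hundreds of images in a batch.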
I hope someone from Apple -- preferably multiple people from the multiple teams that work on these multiple technologies -- can help clear some of this up.