Many image processing models shift and scale the final layer of a network to map sigmoid or tanh output in [0, 1] or [-1, 1] back into 8-bit RGB values in [0, 255]. This is usually done with a simple scale/shift layer whose scale and bias are constants rather than learned parameters, e.g. for tanh output in [-1, 1]:
rgb = x * 255.0/2 + 255.0/2
This does not seem to be possible with the current Core ML tools.
Approaches I have tried:
1) The current version of coremltools (v0.4) does not support Lambda layers, which would be the straightforward way to express this transform into RGB space as the final Keras layer (see the first sketch after this list).
2) The coremltools converter parameters such as image_scale and the red/green/blue bias values apply only to the inputs (see the second sketch after this list). I am trying to go from unit values back into pixel space for the outputs of the model.
3) Applying the transform after receiving the results of the Core ML model (see the third sketch after this list). This is less than desirable, since it breaks the abstraction and drags the developer back down into a per-pixel transform that should be part of the model, rather than simply consuming an image output.
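First sketch, for (1): roughly what I would want to write in Keras, assuming a hypothetical base_model whose final activation is tanh in [-1, 1] (base_model is a placeholder, not an actual model from my project):

    from keras.layers import Lambda
    from keras.models import Model

    # Append a fixed scale/shift so the network itself emits [0, 255] values.
    # base_model is a hypothetical Keras model ending in a tanh activation.
    rgb = Lambda(lambda t: t * (255.0 / 2) + (255.0 / 2))(base_model.output)
    model = Model(inputs=base_model.input, outputs=rgb)

    # coremltools 0.4 rejects this model because Lambda layers are unsupported:
    # coreml_model = coremltools.converters.keras.convert(model)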
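Second sketch, for (2): the preprocessing parameters the converter does expose, which only affect the input image, not the output (same hypothetical model as above):

    import coremltools

    # image_scale and the per-channel biases are applied to the *input* image
    # before it reaches the network (value = image_scale * pixel + bias),
    # e.g. mapping [0, 255] pixels into [-1, 1]. There is no equivalent
    # mechanism for the output side.
    coreml_model = coremltools.converters.keras.convert(
        model,
        input_names='image',
        image_input_names='image',
        image_scale=2.0 / 255.0,
        red_bias=-1.0,
        green_bias=-1.0,
        blue_bias=-1.0,
    )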
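Third sketch, for (3): what the post-processing workaround looks like, shown here with the coremltools prediction API on macOS for brevity ('image', 'output', and input_image are placeholders; on iOS the same per-pixel math ends up in the app code instead):

    import numpy as np

    # Run the model, then rescale the multi-array output back to 8-bit pixels
    # in application code instead of inside the model.
    prediction = coreml_model.predict({'image': input_image})
    unit_output = prediction['output']  # values in [-1, 1]
    pixels = np.clip(unit_output * 255.0 / 2 + 255.0 / 2, 0, 255).astype(np.uint8)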
Are there better approaches to handling pixel-based model outputs? This kind of output mapping is common in image-based neural networks, and it would be nice to have a clean mechanism for handling it.