Allow 16-bit RGBA image formats as input/output of MLModels

Starting in iOS 16 and macOS Ventura, OneComponent16Half will be a new scalar type for images. Ideally, the 16-bit support would also extend to RGBA images. As of now, we have to work around this with an indirection: declare an MLMultiArray with Float (Float16 after the update) as the type and copy its data into the desired image buffer. Direct support for 16-bit RGBA predictions in Image format would be ideal for applications that require high-precision outputs, such as models trained on EDR image data.
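To illustrate, here is a minimal sketch of the workaround described above: the model output is declared as a Float16 MLMultiArray, and we copy it into a 16-bit-per-channel RGBA CVPixelBuffer ourselves. The interleaved-RGBA layout of the multiarray is an assumption; a real model may emit planar data and need an extra conversion pass.

```swift
import CoreML
import CoreVideo

// Workaround sketch: copy a Float16 MLMultiArray (assumed to hold
// interleaved RGBA half floats, row-major) into a 64RGBAHalf pixel
// buffer, since MLModel cannot produce 16-bit RGBA images directly.
func makeRGBAHalfPixelBuffer(from array: MLMultiArray,
                             width: Int, height: Int) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    // kCVPixelFormatType_64RGBAHalf: four 16-bit half-float channels,
    // matching Core Image's default internal working format.
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_64RGBAHalf, nil, &buffer)
    guard status == kCVReturnSuccess, let buffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    let dst = CVPixelBufferGetBaseAddress(buffer)!
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let srcBytesPerRow = width * 4 * MemoryLayout<Float16>.size

    // Row-by-row copy, because the pixel buffer's stride may be padded.
    array.withUnsafeBytes { src in
        for row in 0..<height {
            memcpy(dst + row * dstBytesPerRow,
                   src.baseAddress! + row * srcBytesPerRow,
                   srcBytesPerRow)
        }
    }
    return buffer
}
```

With native 16-bit RGBA image support, this entire copy step (and the extra memory traffic it causes) would disappear.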

This would also be useful when integrating Core ML into Core Image pipelines, since Core Image's internal working format is 16-bit RGBA by default. When passing that into a Neural Style Transfer model with an (8-bit) RGBA image input/output type, conversions are always necessary (as demonstrated in WWDC2022-10027). If the models could use 16-bit RGBA images instead, no conversion would be needed anymore.

Thanks for the consideration!

Also filed as FB10151072.

Thank you for the post. I saw a feature request come through feedbackassistant.apple.com that seems to be related to this. Thanks for the feature request; we will look into this use case.
