As @kerfuffle mentioned, you can use the imageCropAndScaleOption property on Vision's VNCoreMLRequest to control how Vision crops and scales inputs to match the requirements of the Core ML model. See:
https://developer.apple.com/documentation/vision/vncoremlrequest/2890144-imagecropandscaleoption
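For example, a minimal sketch of the Vision path (the model, image, and function names here are placeholders, and the completion handler is left empty):

```swift
import CoreGraphics
import CoreML
import Vision

// Minimal sketch: `model` is an MLModel you have already loaded,
// `cgImage` is the image you want to run through it.
func runVisionRequest(on cgImage: CGImage, with model: MLModel) throws {
    let visionModel = try VNCoreMLModel(for: model)

    let request = VNCoreMLRequest(model: visionModel) { request, error in
        // Inspect request.results (e.g. [VNClassificationObservation]) here.
    }

    // Controls how Vision crops/scales the image to the model's input size.
    request.imageCropAndScaleOption = .centerCrop   // or .scaleFit / .scaleFill

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```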
If you are using Core ML directly, you can also use some of the MLFeatureValue constructors to help with resizing. In particular, the constructors that take a CGImage or an image URL let you specify the desired size either in pixels or via the corresponding MLImageConstraint for the input feature. These constructors also accept an options dictionary in which you can supply .cropAndScale as the key and a VNImageCropAndScaleOption as the value. See:
https://developer.apple.com/documentation/coreml/mlfeaturevalue/3200161-init
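Here is a sketch of that MLFeatureValue path. It assumes the model's image input is named "image" (substitute your own feature name) and that you want Vision-style center cropping:

```swift
import CoreGraphics
import CoreML
import Vision

// Minimal sketch: "image" is a placeholder for your model's actual image
// input name, and `model` is an already-loaded MLModel.
func resizedFeatureValue(for cgImage: CGImage, model: MLModel) throws -> MLFeatureValue {
    // The MLImageConstraint describes the width, height, and pixel format
    // the model expects for this input feature.
    guard let constraint = model.modelDescription
            .inputDescriptionsByName["image"]?
            .imageConstraint else {
        throw NSError(domain: "Example", code: -1) // no image input named "image"
    }

    // The crop-and-scale option is passed as its raw value; Core ML will
    // crop/scale the CGImage to fit the constraint accordingly.
    let options: [MLFeatureValue.ImageOption: Any] = [
        .cropAndScale: VNImageCropAndScaleOption.centerCrop.rawValue
    ]

    return try MLFeatureValue(cgImage: cgImage, constraint: constraint, options: options)
}
```

The returned MLFeatureValue can then be placed in an MLDictionaryFeatureProvider and passed to MLModel's prediction(from:).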
Also note that with the Xcode 12 beta, the code-generated interface for your model will allow your image inputs to be supplied as a CGImage or a URL, and it will do a default resizing for you via this MLFeatureValue mechanism.
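For illustration only, with a hypothetical generated class named MobileNet whose image input is called "image" (the initializer labels Xcode generates are derived from your model's input names, so check the generated interface for the exact spelling):

```swift
import CoreGraphics
import CoreML

// Hypothetical sketch: "MobileNet", "MobileNetInput", and the "image" input
// are stand-ins for your own code-generated model class and feature names.
func classifyWithGeneratedInterface(_ cgImage: CGImage) throws {
    let model = try MobileNet(configuration: MLModelConfiguration())

    // The generated input type accepts a CGImage (or a file URL) directly and
    // applies the default MLFeatureValue-based resizing described above.
    let input = try MobileNetInput(imageWith: cgImage)
    let output = try model.prediction(input: input)
    print(output.classLabel)
}
```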