CoreML image orientation

We're having problems with images being sent to our CoreML model at unsupported image orientations. I'm setting the orientation parameter on VNImageRequestHandler, and since the same parameter works with the OCR model, I know it's being passed correctly. At https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml it says

Most models are trained on images that are already oriented correctly for display. To ensure proper handling of input images with arbitrary orientations, pass the image’s orientation to the image request handler.

but I'm not sure whether the rotation is supposed to happen automatically, or whether they're implying that the model itself can read the orientation and rotate the image accordingly.
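For reference, this is roughly how I'm passing the orientation. As I understand it (not certain), Vision uses this value to reorient the pixel buffer before the model sees it; the model does not read EXIF data itself. The function and names below are just a sketch:

```swift
import Vision
import CoreGraphics

// Sketch: pass the capture's orientation so Vision can upright the
// pixels before they reach the Core ML model. `image`, `orientation`,
// and `request` are placeholders supplied by the caller.
func classify(_ image: CGImage,
              orientation: CGImagePropertyOrientation,
              with request: VNCoreMLRequest) throws {
    let handler = VNImageRequestHandler(cgImage: image,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])
}
```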

Accepted Reply

Never mind, the issue was that I forgot to transform the regionOfInterest from AVFoundation coordinates to Vision coordinates.
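For anyone hitting the same thing: both coordinate spaces are normalized to [0, 1], but AVFoundation's metadata rects have their origin at the top left while Vision's origin is at the bottom left, so the y-axis has to be flipped. A minimal sketch of the conversion (the helper name is my own):

```swift
import CoreGraphics

/// Convert a normalized rect from AVFoundation's top-left-origin space
/// to Vision's bottom-left-origin space. Both rects are in [0, 1].
func visionRect(fromAVRect avRect: CGRect) -> CGRect {
    return CGRect(x: avRect.origin.x,
                  y: 1.0 - avRect.origin.y - avRect.size.height,
                  width: avRect.size.width,
                  height: avRect.size.height)
}
```

The result can then be assigned to `request.regionOfInterest` before performing the request.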
