I'm training a machine learning model in PyTorch using YOLOv5 from Ultralytics.
I use Apple's coremltools to convert the PyTorch (.pt) model into a Core ML model (.mlmodel).
This works fine, and I can use the model in my iOS app, but I have to access its prediction output "manually".
The output shape of the model is a MultiArray: Float32 1 × 25500 × 46.
From the VNCoreMLRequest I receive only a VNCoreMLFeatureValueObservation, from which I can get the MultiArray and iterate through it to find the data I need.
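For context, the "manual" iteration looks roughly like the sketch below. This is a hypothetical illustration, not code from my project: it assumes each of the 25500 rows follows YOLOv5's usual raw layout of [cx, cy, w, h, objectness, per-class scores] (so 46 = 5 + 41 classes here); the function name and threshold are my own.

```python
# Hypothetical sketch of decoding YOLOv5-style raw output by hand.
# Assumed row layout: [cx, cy, w, h, objectness, class_0 ... class_40]
# (46 = 5 + 41 classes) -- adjust to your model's actual class count.

def decode_predictions(rows, conf_threshold=0.25):
    """Filter raw prediction rows into (box, class_id, score) detections."""
    detections = []
    for row in rows:
        cx, cy, w, h, objectness = row[:5]
        class_scores = row[5:]
        # Pick the best-scoring class for this candidate box.
        class_id = max(range(len(class_scores)), key=lambda i: class_scores[i])
        score = objectness * class_scores[class_id]
        if score < conf_threshold:
            continue
        # Convert center/size coordinates to a corner-based box.
        box = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
        detections.append((box, class_id, score))
    return detections
```

A real pipeline would also apply non-maximum suppression afterwards; this only shows the per-row filtering step.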
But I see that, for object-detection models, Apple offers the VNRecognizedObjectObservation type, which is not returned for my model.
Why does my model not return VNRecognizedObjectObservation results? Can I use coremltools to enable this?