MLMultiArray to UIImage or Mat

I am working with an mlmodel whose output is an MLMultiArray, but in fact the model produces an image: a 224x224 grayscale mask.

My app is a mixture of OpenCV, Objective-C, and C++.


I am working in Objective-C. I am trying to translate the MLMultiArray into a Mat matrix for further OpenCV processing.


My approach is to perform the conversion MLMultiArray -> UIImage -> Mat.

There is a convenience method in OpenCV to obtain a Mat from a UIImage, but I am stuck on the MLMultiArray -> UIImage portion.
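For what it's worth, the pixel-conversion step in the middle can be sketched in plain C++. This assumes the model output is a Double-typed MLMultiArray holding values in [0, 1], so that `.dataPointer` exposes a row-major buffer of `double`; the function name `toGrayscale8` is hypothetical. It scales each value to 0-255 and packs it into an 8-bit buffer from which a grayscale CGImage/UIImage can be built.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Convert a row-major width*height buffer of doubles in [0, 1]
// (e.g. the memory behind MLMultiArray.dataPointer for a Double array)
// into 8-bit grayscale pixels suitable for a grayscale CGImage/UIImage.
std::vector<uint8_t> toGrayscale8(const double* data, int width, int height) {
    std::vector<uint8_t> pixels(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            double v = data[y * width + x];
            v = std::min(std::max(v, 0.0), 1.0);  // clamp to [0, 1]
            pixels[static_cast<size_t>(y) * width + x] =
                static_cast<uint8_t>(std::lround(v * 255.0));
        }
    }
    return pixels;
}
```

On iOS the resulting buffer could then be handed to CGImageCreate with a device-gray color space to obtain a UIImage, or wrapped directly in a `cv::Mat(height, width, CV_8UC1, pixels.data())`, skipping UIImage entirely.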

I see various posts around the forums that partially resolve the problem, but I have no idea how to put the pieces together. I am good at deep learning but not exactly skilled in Objective-C / Swift.


Here are some links that get close to the problem, but none shows the whole solution.


https://stackoverflow.com/questions/47828706/how-to-access-elements-inside-mlmultiarray-in-coreml
https://developer.apple.com/documentation/coreml/mlmultiarray/2879222-strides?language=objc
https://developer.apple.com/documentation/coreml/mlmultiarray/2879231-objectforkeyedsubscript?language=objc


Should I use a loop to go over all elements of the MLMultiArray, or is there a more efficient way to transform the array?

Or maybe there is a direct way to convert an MLMultiArray into an OpenCV Mat object?


Here is a link hinting at MLMultiArray -> Mat through dataPointer, but I couldn't find anything more: https://stackoverflow.com/questions/47828706/how-to-access-elements-inside-mlmultiarray-in-coreml
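The dataPointer route might indeed allow skipping UIImage altogether. One detail worth noting is that MLMultiArray's `strides` are expressed in elements, not bytes, so addressing by hand looks like the sketch below (plain C++, with the hypothetical helper `elementAt`; the `cv::Mat` lines in the comment are my untested assumption about how the no-copy wrap would look):

```cpp
#include <cstddef>
#include <vector>

// MLMultiArray exposes .strides (in ELEMENTS, not bytes) per dimension.
// Given the raw buffer from .dataPointer, the element at (y, x) of a
// 2-D array lives at data[y * strides[0] + x * strides[1]].
double elementAt(const double* data,
                 const std::vector<ptrdiff_t>& strides,
                 int y, int x) {
    return data[y * strides[0] + x * strides[1]];
}

// With OpenCV available, the whole 224x224 buffer could be wrapped with
// no copy (the row stride converted to BYTES for Mat's step parameter),
// then scaled down to an 8-bit mask -- an assumption to verify:
//
//   cv::Mat wrapped(224, 224, CV_64FC1, multiArray.dataPointer,
//                   strides[0] * sizeof(double));
//   cv::Mat gray8;
//   wrapped.convertTo(gray8, CV_8UC1, 255.0);  // [0,1] -> [0,255]
```

Note that the wrapped Mat does not own the memory, so the MLMultiArray must outlive it (or the Mat must be cloned).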


Please help; I am chasing my tail on various forums.


Thanks in advance

Replies

Have you achieved your goal? I also encountered this problem. Can we communicate via email? Mine is qmsy122011@gmail.com. I am struggling with how to convert an MLMultiArray into a 2D array or matrix that I can manipulate more easily, like a NumPy matrix in Python.