Why doesn't my Core ML model work?

Things I’d like to do

I want to classify the user's finger trajectories using Core ML.


Things I’ve tried

I recorded the coordinates of users' finger trajectories 426 times. From that data, I created a classification model in Python (Random Forest) and converted it to Core ML.


Result

Core ML: 27% accuracy

Python (Random Forest): 95% accuracy


Since the Core ML model is just the converted Random Forest, classifying with Core ML should basically give the same performance as Random Forest in Python. Instead, almost every sample seems to get the same prediction, and the accuracy was only 27%.

What I would like to ask

I think 27% classification accuracy from Core ML is far too low, and I am wondering whether there is a problem with my implementation. Could you point out anything that looks wrong?


What I did specifically


【Data collection】

1. Collect trajectory data for 4 gesture actions from 4 people (426 samples in total).

2. Convert each trajectory into a 1366 × 1366 array of 0s and 1s, where 1 marks the cells on the finger's path and 0 marks everything else.

3. Because each trajectory is drawn in a different part of the screen, shift it so that its center lies at the center of the array (see the sketch below).
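
Below is a minimal sketch of steps 2–3 in Python. The function names are illustrative, each trajectory is assumed to be a list of (x, y) touch points already scaled to the 1366 × 1366 grid, and "center" is taken here as the bounding-box center:

import numpy as np

GRID = 1366  # side length of the binary array from step 2

def rasterize(points):
    # Step 2: mark each (x, y) touch point with 1 in a GRID x GRID array of 0s
    grid = np.zeros((GRID, GRID), dtype=np.uint8)
    for x, y in points:
        grid[int(y), int(x)] = 1
    return grid

def center(grid):
    # Step 3: shift the trajectory so its bounding-box center sits at the array center
    ys, xs = np.nonzero(grid)
    dy = GRID // 2 - (int(ys.min()) + int(ys.max())) // 2
    dx = GRID // 2 - (int(xs.min()) + int(xs.max())) // 2
    return np.roll(np.roll(grid, dy, axis=0), dx, axis=1)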


【Learning with Python】

4. As pre-processing for training, flatten each array into one-dimensional data (1366 × 1366 = 1,865,956 elements).

5. Train a Random Forest model in Python on the above data; when classifying the data it reached 95% accuracy (see the sketch below).
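
A minimal sketch of steps 4–5, assuming scikit-learn's RandomForestClassifier and a held-out test split for the 95% figure. Here grids and labels stand for the 426 centered arrays and their gesture classes from steps 1–3, and the variable names are illustrative:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X = np.stack([g.reshape(-1) for g in grids])  # step 4: shape (426, 1865956)
y = np.asarray(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)  # step 5
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))  # ~0.95 reported in the post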


【Learning with Swift】

6. Convert the data from step 1 into one-dimensional data (1,865,956 elements), in the same way as step 4.

7. Convert the model from step 5 to a Core ML model that runs on iOS and classify the data there; the result was remarkably low (27% accuracy).

Source code

Python file

import coremltools

# clf is the trained Random Forest from step 5
coreml_model = coremltools.converters.sklearn.convert(clf, "gesture", "classifyGesture")
coreml_model.short_description = "Classify gestures"
coreml_model.input_description["gesture"] = "detected gesture"
coreml_model.output_description["classifyGesture"] = "classified gesture"

coreml_model.save('Gesture.mlmodel')
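
One way to narrow down where the accuracy drops (this verification step is my suggestion, not part of the original code) is to call the converted model from Python before moving to Swift: on macOS, coremltools lets you run predict on the MLModel with the same flattened vector used for training, so its output can be compared with clf.predict directly. Here x stands for any one flattened sample prepared exactly as in step 4 (X_test is from the sketch above):

import numpy as np

x = X_test[0]  # one flattened 1,865,956-element sample, exactly as used for training

sk_label = clf.predict([x])[0]
coreml_out = coreml_model.predict({"gesture": x.astype(np.float64)})  # runs on macOS only
print(sk_label, coreml_out["classifyGesture"])  # the two labels should agree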

Swift file

import UIKit
import CoreML

class ViewController: UIViewController {
    // predictarray, recognizedDataCount and allDataCount are instance
    // properties declared elsewhere in this class.

    // Returns the flattened trajectory for sample `key` as a one-dimensional MLMultiArray
    func createVec(key: Int) -> MLMultiArray {
       ...
        return vecsMl
    }

    // Classifies one sample with the converted Core ML model and updates the counters
    func classifyGesture(key: Int) {
        if #available(iOS 11.0, *) {
            let vec = createVec(key: key)
            do {
                let prediction = try Gesture().prediction(gesture: vec).classifyGesture
                predictarray.append(prediction)
                if prediction == key {
                    recognizedDataCount += 1
                }
                allDataCount += 1
            } catch {
                print("Prediction failed: \(error)")
            }
        } else {
            print("Core ML requires iOS 11 or later")
        }
    }
}

Replies

That is strange. I wonder whether the Python code applies pre-processing that the Swift code does not. That is probably the reason for such a huge difference.
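
To illustrate that point with an assumed example (not from the original code): even if both sides build the same 1366 × 1366 grid, flattening it in a different order, such as row-major in Python and column-major in Swift, puts every feature at a different index, which is enough to push a Random Forest down to near-chance accuracy.

import numpy as np

grid = np.zeros((1366, 1366), dtype=np.uint8)
grid[683, 100:1200] = 1  # a horizontal stroke through the middle row

row_major = grid.flatten(order="C")  # e.g. what the Python training data used
col_major = grid.flatten(order="F")  # e.g. what a mismatched Swift loop might produce

print(np.array_equal(row_major, col_major))      # False: same pixels, different indices
print(np.count_nonzero(row_major != col_major))  # thousands of positions disagree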