Object detection for dice not working

Hi from France !

I'm trying to create a model for dice detection.

  • I've taken about 100 photos of dice all showing the same face (1 pip).

Are my bounding boxes good? Should I include the whole die?

  • I launched the training, and it seems to work well:

  • Then in the Evaluation tab, the values seem not great but not bad: I/U 84%, Varied I/U 44% (see the I/U sketch at the end of this post)

  • The validation score is very low:

  • In the Preview tab, no matter what image I give it, I get no detections

What am I missing? What should I improve?
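For context, I/U is intersection-over-union: the overlap between a predicted box and its ground-truth box, divided by the area of their union. Here is a minimal sketch of how that overlap is computed, in plain Swift (the example boxes are made-up values, not taken from my dataset or from Create ML's output):

  import CoreGraphics

  /// Intersection-over-union of two axis-aligned boxes.
  func iou(_ a: CGRect, _ b: CGRect) -> CGFloat {
      let inter = a.intersection(b)
      guard !inter.isNull else { return 0 }
      let interArea = inter.width * inter.height
      let unionArea = a.width * a.height + b.width * b.height - interArea
      return unionArea > 0 ? interArea / unionArea : 0
  }

  // Example: a prediction shifted slightly off the ground truth.
  let groundTruth = CGRect(x: 1095, y: 1622, width: 952, height: 962)  // hypothetical die box
  let prediction  = CGRect(x: 1150, y: 1700, width: 900, height: 900)
  print(iou(groundTruth, prediction))  // about 0.85 for this pair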

Hi,

I'm sharing my improvements. I've added some training data from a set found on the internet, with very different colors and sizes. The results are better now, but not what I expected:

Weird result:

Good but incomplete

Nothing:

Good & bad :)

Stats

Could you share the annotation format you are using? The bounding boxes you show follow the edges of the die in whatever orientation it happens to be sitting, but the format for the object detector requires a rectangular bounding box, parallel to the x and y axes. Also be mindful of the coordinate system you're using. The "Good but incomplete" version looks like it's a good start, but if you are using an annotation format that anchors to the upper left of the bounding boxes and the app is expecting the center of the bounding box, your model is going to be training on different areas than you expect.

See https://developer.apple.com/documentation/createml/building-an-object-detector-data-source for more details on setting up your datasource.
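If your labeling tool does turn out to anchor boxes at the top-left corner, you can re-anchor them to the center before training. A minimal sketch of that conversion in Swift (the Codable structs mirror the annotation JSON shown in this thread; the file names are placeholders for your own export):

  import Foundation

  // Mirrors the Create ML object detection annotation JSON.
  struct Entry: Codable {
      struct Box: Codable {
          var x: Double       // Create ML treats (x, y) as the box center, in pixels
          var y: Double
          var width: Double
          var height: Double
      }
      struct Object: Codable {
          var label: String
          var coordinates: Box
      }
      var image: String
      var annotations: [Object]
  }

  // Placeholder file names -- point these at your real export.
  let inputURL = URL(fileURLWithPath: "annotations_topleft.json")
  let outputURL = URL(fileURLWithPath: "annotations.json")

  do {
      var entries = try JSONDecoder().decode([Entry].self,
                                             from: Data(contentsOf: inputURL))
      // If the exporter anchored (x, y) at the top-left corner,
      // shift each box so (x, y) becomes its center.
      for e in entries.indices {
          for a in entries[e].annotations.indices {
              entries[e].annotations[a].coordinates.x += entries[e].annotations[a].coordinates.width / 2
              entries[e].annotations[a].coordinates.y += entries[e].annotations[a].coordinates.height / 2
          }
      }
      try JSONEncoder().encode(entries).write(to: outputURL)
  } catch {
      print("Conversion failed:", error)
  }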

Hi! Thanks for your answer.

Indeed, on some dice I rotated the bounding box in the RectLabel app, but after exporting, the rotation is removed:

  {
    "image": "IMG_5653.HEIC",
    "annotations": [
      {
        "label": "1",
        "coordinates": {
          "y": 2103,
          "x": 1571,
          "width": 952,
          "height": 962
        }
      }
    ]
  },

On the left is the actual generated bounding box, and on the right the visualization in RectLabel, so the bounding box looks fine.

but if you are using an annotation format that anchors to the upper left of the bounding boxes and the app is expecting the center of the bounding box, your model is going to be training on different areas than you expect.

I'm not sure what the anchor point is in the above JSON. For now I don't have any app; I'm just trying to train the model.
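One way to check the anchor without an app: if (x, y) were the box center, the box would have to fit inside the image around that point; if it only fits when (x, y) is treated as the top-left corner, the export is corner-anchored (and if both interpretations fit, overlay the box on the photo instead). A rough sketch of that check in Swift, using the annotation above and a placeholder image size (replace it with the real pixel dimensions of IMG_5653.HEIC):

  import CoreGraphics

  // Values from the annotation above; the image size is a placeholder.
  let imageSize = CGSize(width: 4032, height: 3024)   // replace with IMG_5653.HEIC's real size
  let (x, y, w, h) = (1571.0, 2103.0, 952.0, 962.0)

  // Interpretation 1: (x, y) is the center of the box.
  let asCenter = CGRect(x: x - w / 2, y: y - h / 2, width: w, height: h)
  // Interpretation 2: (x, y) is the top-left corner.
  let asCorner = CGRect(x: x, y: y, width: w, height: h)

  let imageRect = CGRect(origin: .zero, size: imageSize)
  print("fits as center:", imageRect.contains(asCenter))
  print("fits as corner:", imageRect.contains(asCorner))
  // Create ML expects the center interpretation; if only the corner one fits,
  // convert the export before training.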

What's the next step?

  • Should I add more images to the training set?
  • There is a "transfer learning" option; should I use it?

Actually, I tried this option; the loss is very low (0.01), but the actual detection is no better.
