Create ML - Additional Training for LED/LCD Characters on Image?

I've been using VNRecognizeTextRequest, VNImageRequestHandler, VNRecognizedTextObservation, and VNRecognizedText successfully (in Objective-C) to recognize bright LED/LCD characters that make up a number string (arranged in one of several date formats) on a scanned photograph, but only about 25% of the time.
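
For reference, the request is set up roughly like this (simplified; croppedImage is the pre-processed CGImage described in the next paragraph):

```objc
#import <Vision/Vision.h>

// croppedImage is the CGImageRef produced by the crop + filter steps below
VNRecognizeTextRequest *textRequest =
    [[VNRecognizeTextRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
        for (VNRecognizedTextObservation *observation in request.results) {
            VNRecognizedText *best = [[observation topCandidates:1] firstObject];
            NSLog(@"recognized: %@ (confidence %.2f)", best.string, best.confidence);
        }
    }];
textRequest.recognitionLevel = VNRequestTextRecognitionLevelAccurate;
textRequest.usesLanguageCorrection = NO;   // the string is digits only

VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCGImage:croppedImage options:@{}];
NSError *requestError = nil;
[handler performRequests:@[textRequest] error:&requestError];
```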

I first crop to the fixed area where the characters are located, then apply some Core Image filters to render the characters in black and white and remove as much background clutter as possible.
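
The filter chain is along these lines (the particular filters and values are just what I've been experimenting with, not a final pipeline):

```objc
#import <CoreImage/CoreImage.h>

CIImage *input = [CIImage imageWithCGImage:croppedImage];

// Desaturate and boost contrast so the lit segments stand out from the background
CIFilter *mono = [CIFilter filterWithName:@"CIColorControls"];
[mono setValue:input forKey:kCIInputImageKey];
[mono setValue:@0.0 forKey:kCIInputSaturationKey];
[mono setValue:@1.5 forKey:kCIInputContrastKey];

// Knock down background noise before handing the image to Vision
CIFilter *denoise = [CIFilter filterWithName:@"CINoiseReduction"];
[denoise setValue:mono.outputImage forKey:kCIInputImageKey];
[denoise setValue:@0.02 forKey:@"inputNoiseLevel"];
[denoise setValue:@0.40 forKey:@"inputSharpness"];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cleaned = [context createCGImage:denoise.outputImage
                                   fromRect:denoise.outputImage.extent];
```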

Only when the characters are nearly perfect, neither over- nor under-exposed, do I get back a string with all the characters correct. For example, an LED image of 93 5 22 will often come back as 93 S 22, and 97 4 14 may come back as 97 Y 14. I can easily substitute commonly confused letters with the digits they resemble, but I would prefer to raise the recognition rate well above 25% (realistically it will probably never exceed 50%-75%).
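
The substitution itself is trivial, something like this (the mapping table is just an illustration of the confusions I've seen):

```objc
// Map characters Vision commonly mistakes on seven-segment displays back to digits
NSDictionary<NSString *, NSString *> *fixups = @{ @"S": @"5", @"s": @"5",
                                                  @"Y": @"4", @"O": @"0",
                                                  @"I": @"1", @"B": @"8" };
NSMutableString *cleaned = [recognizedString mutableCopy];
[fixups enumerateKeysAndObjectsUsingBlock:^(NSString *letter, NSString *digit, BOOL *stop) {
    [cleaned replaceOccurrencesOfString:letter
                             withString:digit
                                options:0
                                  range:NSMakeRange(0, cleaned.length)];
}];
```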

So I thought I could use Create ML to create a model (based on the text recognition model Apple has already built), with training folders labeled for each numeric LED/LCD character (1, 2, 3, ...) and containing blurred, noisy, and over-/under-exposed variants, to improve the recognition.
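
If an image classifier turns out to be the right route, I assume I'd consume the resulting model from Objective-C through Vision, roughly like this (LEDDigitClassifier is just a placeholder name for whatever Create ML produces, and singleCharacterImage would be one character cropped out of the date string):

```objc
#import <CoreML/CoreML.h>
#import <Vision/Vision.h>

NSError *error = nil;
NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"LEDDigitClassifier"
                                          withExtension:@"mlmodelc"];
MLModel *coreMLModel = [MLModel modelWithContentsOfURL:modelURL error:&error];
VNCoreMLModel *visionModel = [VNCoreMLModel modelForMLModel:coreMLModel error:&error];

VNCoreMLRequest *classifyRequest =
    [[VNCoreMLRequest alloc] initWithModel:visionModel
                         completionHandler:^(VNRequest *request, NSError *requestError) {
        VNClassificationObservation *top = [request.results firstObject];
        NSLog(@"character: %@ (confidence %.2f)", top.identifier, top.confidence);
    }];

VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCGImage:singleCharacterImage options:@{}];
[handler performRequests:@[classifyRequest] error:&error];
```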

Can I use Create ML to do this? Should I use the Image Object Detection template, or is Text Classification the way to get back a string like "93 5 22" that I can manipulate later with regular expressions?
