Posts

Post marked as solved
3 Replies
I was able to use other normal fonts from the list of Apple standard fonts. Helvetica, I think. I think it is just the standard user-interface fonts that do not work.

```swift
let fontName = "Helvetica"
let meshResource = MeshResource.generateText(
    message,
    extrusionDepth: 0.001,
    font: .init(descriptor: .init(name: fontName, size: size), size: size),
    containerFrame: .init(x: -width/2, y: -height/2, width: width, height: height),
    alignment: .center,
    lineBreakMode: .byWordWrapping)
```
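For context, once you have the mesh, here is a minimal sketch of putting it on screen. The material, colour, and anchor position are placeholders, and `meshResource` is assumed to be the result of the `generateText` call above:

```swift
import RealityKit
import UIKit

// Sketch: wrap the generated text mesh in an entity and anchor it in the scene.
func addTextEntity(meshResource: MeshResource, to arView: ARView) {
    let material = SimpleMaterial(color: .white, isMetallic: false)
    let entity = ModelEntity(mesh: meshResource, materials: [material])
    let anchor = AnchorEntity(world: [0, 0, -0.5])  // half a metre in front of the origin
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
}
```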
Post not yet marked as solved
5 Replies
Sometimes it is easier to rub out confusing things in an image editor. So you could have a picture with a cat and other animals: rub out the other animals in the cat folder. You can have a folder of images that contain no classified animals; rub out all the classified ones. Alternatively, you label all the animals and programmatically remove labels for the animals you are not yet classifying before training. It is best to have balanced data, with about the same number of each thing you are detecting. To add a new classification you may need to scrape more images containing that animal to make it balanced. When you have enough, change the filter so it no longer removes that label. Another trick for balancing is to use augmentation on enough images to make it balanced. So, for example, flip enough images of the one you are short of to make up the difference. (Programmatically flip the labels too.)
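The label-flipping part can be sketched like this. The `Annotation` type here is hypothetical (a Create ML-style object detection box with a centre-based origin in pixel coordinates); adapt it to whatever your annotation format actually uses:

```swift
import CoreGraphics

// Hypothetical annotation: one labelled bounding box, centre-based, in pixels.
struct Annotation {
    var label: String
    var x: CGFloat      // box centre x
    var y: CGFloat      // box centre y
    var width: CGFloat
    var height: CGFloat
}

// Mirror the bounding boxes to match a horizontally flipped image.
// Only the x coordinate changes: the new centre is measured from the
// right-hand edge of the image instead of the left.
func flippedAnnotations(_ annotations: [Annotation],
                        imageWidth: CGFloat) -> [Annotation] {
    annotations.map { box in
        var flipped = box
        flipped.x = imageWidth - box.x
        return flipped
    }
}
```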
Post not yet marked as solved
7 Replies
Orientation is how the data is mapped to an image the correct way up. If it looks right, then I think MakeML should interpret it correctly. The data looks OK to me. The "Pixel Height" is less than the width, so the data is "landscape", but it is rotated, so that's fine: it's a portrait image. On the phone you need to do the same thing with your frame grab (if this is the issue). If your width is longest, then set the orientation to be a rotation. It is in the function exifOrientationFromDeviceOrientation() in the example, I think.
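For reference, the helper mentioned above looks roughly like this in Apple's object detection sample code. The mapping below assumes the rear camera in its native landscape position, so treat it as a sketch to check against the actual sample rather than a drop-in:

```swift
import UIKit
import ImageIO

// Map the current device orientation to the EXIF orientation that tells
// Vision which way up the camera buffer is.
func exifOrientationFromDeviceOrientation() -> CGImagePropertyOrientation {
    switch UIDevice.current.orientation {
    case .portraitUpsideDown:   // home button on top
        return .left
    case .landscapeLeft:        // home button on the right
        return .upMirrored
    case .landscapeRight:       // home button on the left
        return .down
    case .portrait:             // home button on the bottom
        return .up
    default:
        return .up
    }
}
```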
Post not yet marked as solved
1 Reply
Reading up on this, I think I may have two issues. There is a bug in the balancing logic in my augmentation code, so my training data is unbalanced. Also, this issue arises if the batch size is too small: training then uses a part of the training set that will, statistically, be unbalanced. The memory usage is not that high, so I could increase the batch size, but I cannot see a way to do this in the Create ML app. I will try to get rid of the bugs and report back. It looks like reducing the training data set size helps, and also reducing the imbalance.
Post not yet marked as solved
7 Replies
If you have trained with all the objects in one orientation, this could imply the image is being interpreted wrongly. You may need to fiddle with the EXIF orientation. For testing, hard-wire it to a value like CGImagePropertyOrientation.up. It can be changed with device orientation, but I hard-wired it. Is your training data portrait also? I think it helps if it matches. If you detect with an app image that has a different aspect ratio to the trained images, I think it kind of works. However, if you have trained with one orientation and the orientations do not match, it will probably not work well. Does MakeML have a test app? Is that working?
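Hard-wiring the orientation for a test can look like this. `pixelBuffer` and `requests` are assumed to come from your own capture pipeline; the point is only that the orientation argument is fixed rather than derived from the device:

```swift
import Vision
import CoreVideo

// Sketch: pin the orientation to .up while debugging. If detections
// suddenly start working, the orientation mapping was the problem.
func runDetection(on pixelBuffer: CVPixelBuffer, requests: [VNRequest]) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .up,   // hard-wired for testing
                                        options: [:])
    try? handler.perform(requests)
}
```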
Post not yet marked as solved
7 Replies
I had an issue with this example code using my own model. It turns out that imageCropAndScaleOption needed to be set to .scaleFill in the iOS code so that the detection was done over the whole image. By default the detection was set up to crop a square from the middle, and anything outside the square did not get considered for detection. Also, the bounding boxes used a strange coordinate system that meant they did not line up with the image until the fix. This is from my own code, but you should be able to find the documentation and work out where it goes if the var name is different:

```swift
objectRecognition.imageCropAndScaleOption = .scaleFill
```

I could see imageCropAndScaleOption was wrong in Xcode before making the change. I think ideally you train with the same aspect ratio, or with a mix of landscape and portrait images if you need to support both. If the training data has no typical orientation, for example a perfect plan view of objects, I think you can fix the orientation. It gets a bit confusing. For my projects I have only supported one orientation to make it easier.
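To show where that one line sits, here is a sketch of the request setup, assuming a Core ML object detection model wrapped in a VNCoreMLRequest (the names are illustrative, not from the original poster's project):

```swift
import Vision

// Sketch: build the detection request and scale the whole frame into
// the model's input instead of centre-cropping a square (the default).
func makeDetectionRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
    let objectRecognition = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        // Vision returns boxes normalised to 0...1 with the origin at the
        // lower left; VNImageRectForNormalizedRect converts them to pixel
        // coordinates (you may still need to flip the y axis for UIKit drawing).
        for observation in results {
            _ = VNImageRectForNormalizedRect(observation.boundingBox, 1920, 1080)
        }
    }
    objectRecognition.imageCropAndScaleOption = .scaleFill
    return objectRecognition
}
```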