Post not yet marked as solved
7 Replies
You may have a mistake here when saying "The 'Pixel Height' is less than the width so the data is 'landscape' but it is rotated": in both images I sent, the Pixel Height is greater than the width. I also tried changing the return value of exifOrientationFromDeviceOrientation() to every value the enum allows (.up, .down, etc.), but in portrait mode the phone still cannot detect the tower with a correct bounding box.

I set a breakpoint in VisionObjectRecognitionViewController, in captureOutput(), to inspect the pixelBuffer, and found that the image in the pixelBuffer does not display normally in portrait: it is rotated 90 degrees counterclockwise. So I built a new ML model with all input images rotated 90 degrees counterclockwise. After exporting that model and dropping it into the BreakfastFinder sample, the app now detects in portrait mode with a correct bounding box. But landscape detection no longer works. Do you think that's normal? I don't know how to explain this situation; it's weird.
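For anyone following along, the "rotated 90 degrees CCW" symptom usually comes down to how EXIF orientation tags relate to the raw pixel data. This is a minimal, hypothetical sketch (the names EXIF_ROTATION_CW and upright_size are mine, not from the BreakfastFinder sample) of the common tag-to-rotation mapping:

```python
# Hypothetical helper: maps the common EXIF orientation tags to the
# clockwise rotation (degrees) needed to display the raw pixels upright.
EXIF_ROTATION_CW = {
    1: 0,    # pixel data already upright
    3: 180,  # upside down
    6: 90,   # typical iPhone portrait shot: data stored sideways, rotate 90 CW
    8: 270,  # portrait the other way: rotate 270 CW (i.e. 90 CCW)
}

def upright_size(pixel_width, pixel_height, orientation):
    """Return the displayed (width, height) once the EXIF tag is honored."""
    if EXIF_ROTATION_CW.get(orientation, 0) in (90, 270):
        # a 90/270 degree rotation swaps the two dimensions
        return pixel_height, pixel_width
    return pixel_width, pixel_height
```

The point is that with orientation 6 the trained model only ever sees the un-rotated pixel data, which is why baking the rotation into the training images flips which device orientation works.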
MakeML converts my images to this: https://ibb.co/QD4d3NX

My original images were taken on an iPhone in portrait mode with EXIF orientation 6 (90 degrees CCW): https://ibb.co/7bYqMzw

I also tried converting the images to EXIF orientation 1 (normal) before passing them to MakeML to create the images & annotations dataset: https://ibb.co/0Xk8W1b

None of it works. Which step could possibly be the problem here?
1. The original images?
2. The converted images and annotations?
3. A Turi Create problem? Here is my code:

import turicreate as tc

# Load data
images = tc.load_images('/Users/kid/Desktop/TowerIp6LocationOnDataset/images')
annotations = tc.SFrame('/Users/kid/Desktop/TowerIp6LocationOnDataset/annotations.csv')
data = images.join(annotations)
model = tc.object_detector.create(data, max_iterations=600)

# Evaluation
model.evaluate(testdata)
# This reports mean_average_precision_50 ~ 0.85 -> very high

# Export
model.export_coreml('/Users/kid/Desktop/TowerIp6LocationOn.mlmodel')

4. A Core ML problem?
5. A Breakfast Finder source code problem?
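One thing worth checking in step 2: if you rotate the training images (by hand or by stripping EXIF), the bounding-box annotations must be rotated with them, or the boxes no longer cover the objects. Turi Create's object detector uses center-based boxes ({'x', 'y', 'width', 'height'} with the origin at top-left). This is a sketch of the 90-degrees-CCW transform for such a box; rotate_box_ccw is a hypothetical helper name, not a Turi Create API:

```python
def rotate_box_ccw(box, img_w, img_h):
    """Rotate a center-based bounding box 90 degrees counterclockwise,
    matching an image rotated from img_w x img_h to img_h x img_w.

    Under a 90 CCW image rotation, a point (x, y) maps to (y, img_w - x),
    and the box's width/height swap.
    """
    c = box['coordinates']
    return {
        'label': box.get('label'),
        'coordinates': {
            'x': c['y'],
            'y': img_w - c['x'],
            'width': c['height'],
            'height': c['width'],
        },
    }
```

If the annotations were produced on the EXIF-rotated rendering but training consumed the raw pixels (or vice versa), the model can score well on the equally-skewed evaluation set and still fail on device.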
Thanks for your answer. However, I found the scale option just makes the bounding box draw more precisely onto the object; the orientation problem still exists for me. I use MakeML to label images, and it exports all images at 150x200. All the images were taken in portrait mode. I just want the model to be able to detect in portrait mode, but it can't. For more information about the scaleFit option's purpose, see: https://github.com/apple/turicreate/issues/1016