Posts

Post not yet marked as solved · 0 Replies · 306 Views
I'm using the Vision framework for text recognition and rectangle detection in images, via the VNRecognizeTextRequest and VNDetectRectanglesRequest features. For the same image, I found a slight difference between the macOS and iOS results in the boundingBox coordinates of the detected text and rectangles. Is this expected? Can we do anything to make the results identical? Also, on macOS, when I use the same Vision requests from Python (via the pyobjc-framework-Vision package), I again get slightly different results.
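For reference, a minimal Swift sketch of the setup being described (the function and variable names are mine, not from the post). It pins the request revisions explicitly, since differing default revisions between OS releases is one plausible source of small boundingBox differences:

import Vision
import CoreGraphics

// A sketch, not the poster's actual code: run both requests on one image with
// the request revisions pinned, so every platform (and the pyobjc run) uses the
// same model revision.
func detectTextAndRectangles(in image: CGImage) throws {
    let textRequest = VNRecognizeTextRequest()
    textRequest.revision = VNRecognizeTextRequestRevision2    // pin the revision explicitly
    textRequest.recognitionLevel = .accurate

    let rectangleRequest = VNDetectRectanglesRequest()
    rectangleRequest.revision = VNDetectRectanglesRequestRevision1
    rectangleRequest.maximumObservations = 0                  // 0 = return all rectangles

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([textRequest, rectangleRequest])

    for text in textRequest.results ?? [] {
        // boundingBox is normalized to [0, 1] with the origin at the lower left.
        print("text:", text.topCandidates(1).first?.string ?? "", text.boundingBox)
    }
    for rectangle in rectangleRequest.results ?? [] {
        print("rectangle:", rectangle.boundingBox)
    }
}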
Post not yet marked as solved · 1 Reply · 416 Views
I'm using VisionKit for text recognition and rectangle detection in images, via the underlying Vision framework's VNRecognizeTextRequest and VNDetectRectanglesRequest features. For the same image, I found a slight difference between the macOS and iOS results in the boundingBox coordinates of the detected text and rectangles. Is this expected? Can we do anything to make the results identical? Also, on macOS, when I use the same requests from Python (via the pyobjc-framework-Vision package), I again get slightly different results.
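When comparing results across platforms, it can help to look at the boxes in pixel space rather than normalized space. A small sketch (the helper name is hypothetical) using Vision's VNImageRectForNormalizedRect:

import Vision
import CoreGraphics

// A hypothetical helper, not from the post: convert Vision's normalized,
// lower-left-origin boundingBox into pixel coordinates, so tiny normalized
// differences can be judged against the real image dimensions.
func pixelRect(for observation: VNDetectedObjectObservation,
               imageWidth: Int,
               imageHeight: Int) -> CGRect {
    // VNImageRectForNormalizedRect scales the normalized rect to pixel space;
    // flip the y-axis afterwards if you need a top-left origin.
    return VNImageRectForNormalizedRect(observation.boundingBox, imageWidth, imageHeight)
}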