Posts

Post not yet marked as solved
0 Replies
923 Views
I am trying to train an object detection model using transfer learning with a small dataset (roughly 650 images and two classes) in Create ML v2.0 (53.2.2) with "Prefer External GPU" checked. I am using a 2018 Mac mini (3.2 GHz i7, 16 GB of RAM) with an AMD Radeon Pro 580 eGPU. The problem I am having is that I can only run about 3,500 iterations before I run out of memory and need to pause the training. When I resume training, my loss increases again and it takes a while to get back down to where it was before I paused. So I am wondering if there is a better way to set up the hardware, or any other suggestions, so I can get through all of the iterations without having to pause. I don't recall having this issue with Create ML v1.0, so any suggestions would be appreciated.
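For reference, a minimal sketch of what a scripted run with checkpoints might look like using the CreateML framework's training-session API on macOS 11; the data-source case, paths, intervals, and iteration count here are assumptions for illustration, not values from the post:

import CreateML
import Foundation

// Placeholders: point these at real directories.
let trainingDir = URL(fileURLWithPath: "/path/to/training-data")
let sessionDir = URL(fileURLWithPath: "/path/to/session")

let data = MLObjectDetector.DataSource.directoryWithImagesAndJsonAnnotation(at: trainingDir)

// Checkpoints are written to the session directory, so an interrupted run
// can be resumed from the last checkpoint rather than re-climbing the loss curve.
let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: sessionDir,
    reportInterval: 100,
    checkpointInterval: 500,
    iterations: 7000)

let job = try MLObjectDetector.train(
    trainingData: data,
    sessionParameters: sessionParameters)

// job.result is a Combine publisher that eventually delivers the trained model.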
Posted by bbarry.
Post not yet marked as solved
0 Replies
461 Views
I have a pre-recorded video loaded into Xcode for testing, and when I run the trajectory request on the video playing in the simulator, the observations I get back are inconsistent and vary with every build and run. For example, one session will produce no observations and the next run will produce multiple observations. Are there any suggestions or tips for improving the consistency of the observations?
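For comparison, reading the file's frames directly with AVAssetReader, rather than analyzing simulator playback, feeds every frame to the request in a fixed order. A minimal sketch, with videoURL as a placeholder:

import AVFoundation
import Vision

let asset = AVAsset(url: videoURL)  // videoURL is a placeholder
let reader = try AVAssetReader(asset: asset)
let track = asset.tracks(withMediaType: .video).first!
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     kCVPixelFormatType_420YpCbCr8BiPlanarFullRange])
reader.add(output)
reader.startReading()

// The same request instance is reused across frames; trajectory detection is stateful.
let request = VNDetectTrajectoriesRequest(frameAnalysisSpacing: .zero,
                                          trajectoryLength: 5) { request, _ in
    if let results = request.results as? [VNTrajectoryObservation], !results.isEmpty {
        print(results)
    }
}

while let sampleBuffer = output.copyNextSampleBuffer() {
    // The sample buffer carries the frame's timestamp, which the request needs.
    let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .up, options: [:])
    try handler.perform([request])
}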
Posted by bbarry.
Post not yet marked as solved
0 Replies
538 Views
When I train an object detection model using transfer learning and use it in an app on the device, I get a lot of false positive predictions with extremely high confidence that I do not get when I train the same model as a full-network model. For example, the model is looking for a shoe, and it will generate a false positive on an image of a blank wall with a confidence of over 95%. Yet when I test the model by dragging an image of the same wall into the preview in Xcode, it correctly classifies the image. In fact, simply moving the camera so it goes out of focus for a brief second always generates an incorrect prediction. None of these issues are present when using the model trained with the full network. I would prefer to use transfer learning until I am able to generate enough training data, so I have two questions: Is there a reason for this? Is there a way to prevent it?
Posted by bbarry.
Post not yet marked as solved
0 Replies
491 Views
According to the Building an Action Classifier Data Source article in the Create ML documentation (https://developer.apple.com/documentation/createml/creating_an_action_classifier_model/building_an_action_classifier_data_source), the start and end times can be "A string of hours, minutes, and seconds, for example: 05:01:03". However, when I load the training data into Create ML, I get the following error: "Unexpected value is not Double or Int in 'end' csv column". What I have in the CSV file is this:

video1.mp4,test_label,0:01:00,0:01:01

Is there a trick to getting this to work, or can it only be an Int or a Double?
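Judging from the error message alone, one workaround would be to express the times as plain seconds so the column parses as a Double; for the row above, 0:01:00 and 0:01:01 become:

video1.mp4,test_label,60.0,61.0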
Posted by bbarry.
Post marked as solved
2 Replies
665 Views
I have set the region of interest on my VNDetectContoursRequest, but the observations I am getting back all seem to be outside of the region of interest. So my question is: if the region of interest is set, should I only get back observations inside that region, or should I still get back all of the contour observations for the image?
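For context, a minimal sketch of the setup in question; the rectangle value is a placeholder, and regionOfInterest is a normalized CGRect with a lower-left origin:

import Vision

let request = VNDetectContoursRequest { request, _ in
    guard let observation = request.results?.first as? VNContoursObservation else { return }
    print(observation.contourCount)
}
// Normalized coordinates, lower-left origin; here the lower-left quadrant of the image.
request.regionOfInterest = CGRect(x: 0, y: 0, width: 0.5, height: 0.5)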
Posted by bbarry.
Post not yet marked as solved
3 Replies
715 Views
I have a VNDetectTrajectoriesRequest:

private lazy var pathRequest: VNDetectTrajectoriesRequest = {
    return VNDetectTrajectoriesRequest(frameAnalysisSpacing: .zero,
                                       trajectoryLength: 5,
                                       completionHandler: trajectoryHandler)
}()

and the request handler inside captureOutput(_:didOutput:from:) for the camera, like so:

do {
    try requestHandler.perform([pathRequest])
} catch {
    print("Unable to perform the request: \(error).")
}

When I run the app, I see the completion handler get called 5 times with empty observations, and then nothing. Even if a ball is rolled through the video being captured, the completion handler never shows an observation.

func trajectoryHandler(request: VNRequest, error: Error?) {
    print("have trajectory observation")
    guard let observations = request.results as? [VNTrajectoryObservation] else { return }
    print(observations)
}

Any thoughts on what might be going on?
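For completeness, the one piece not shown above is how requestHandler is created. A sketch of the usual per-frame construction inside the delegate callback; the orientation value is an assumption:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // A new handler per frame; the same pathRequest instance is reused
    // across frames because trajectory detection is stateful.
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer,
                                               orientation: .up,
                                               options: [:])
    do {
        try requestHandler.perform([pathRequest])
    } catch {
        print("Unable to perform the request: \(error).")
    }
}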
Posted by bbarry.
Post marked as Apple Recommended
1.3k Views
I was under the impression that Vision handles the scaling of images; am I to understand that it does not? If I have a Core ML model with an image input of 416 x 416 and the image being passed into the model is 1920 x 1080, should I be using Core Image to scale it to the size the input is expecting?
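For what it's worth, when a model is run through Vision, VNCoreMLRequest does scale the input according to its imageCropAndScaleOption. A minimal sketch, where model stands in for a compiled VNCoreMLModel:

import Vision

let request = VNCoreMLRequest(model: model) { request, _ in
    print(request.results ?? [])
}
// Controls how a 1920 x 1080 frame is mapped onto a 416 x 416 model input:
// .scaleFill stretches, .scaleFit letterboxes, .centerCrop crops the middle.
request.imageCropAndScaleOption = .scaleFill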
Posted by bbarry.
Post marked as solved
2 Replies
524 Views
I have a storyboard with a stack view nested inside a stack view, but I cannot seem to get some of the fields to show up correctly. I have tried adjusting the Content Hugging and Compression Resistance priorities, but adjusting those only pushes another field into the wrong position. I cannot seem to find the magic sauce to get it to work, so any suggestions would be helpful.
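As a point of reference, the same inspector values can be set in code; a sketch with a hypothetical nameField inside the inner stack view:

import UIKit

// Programmatic equivalents of the Interface Builder inspector values above;
// nameField is a hypothetical subview. (AppKit's NSView has the same methods.)
nameField.setContentHuggingPriority(.defaultHigh, for: .horizontal)
nameField.setContentCompressionResistancePriority(.required, for: .horizontal)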
Posted by bbarry.
Post not yet marked as solved
0 Replies
480 Views
I am working on using Vision on macOS to track objects in a video playing via AVKit. My detected object observation never seems to change; it is always where the initial selection of the object was. I am wondering if there is a way to see what Vision thinks is the object it is tracking. Additionally, from my understanding, the coordinate systems of Vision/AVKit and NSView should be the same; is that correct? Any thoughts on this would be helpful. Thanks, B
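For context, the usual tracking loop feeds each frame's result back in as the next frame's input. A minimal sketch, with initialRect and pixelBuffer as placeholders:

import Vision

let sequenceHandler = VNSequenceRequestHandler()
var lastObservation = VNDetectedObjectObservation(boundingBox: initialRect)

// Called once per frame of the playing video.
func track(pixelBuffer: CVPixelBuffer) throws {
    // Build the request from the most recent observation, not the initial one,
    // so the tracker follows the object from frame to frame.
    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
    request.trackingLevel = .accurate
    try sequenceHandler.perform([request], on: pixelBuffer)
    if let newObservation = request.results?.first as? VNDetectedObjectObservation {
        lastObservation = newObservation
        print(newObservation.boundingBox)  // normalized, lower-left origin
    }
}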
Posted by bbarry.